diff --git a/Assets/akit.png b/Assets/akit.png
new file mode 100644
index 0000000..9b177cb
Binary files /dev/null and b/Assets/akit.png differ
diff --git a/Assets/bb.jpg b/Assets/bb.jpg
new file mode 100644
index 0000000..a96cb6a
Binary files /dev/null and b/Assets/bb.jpg differ
diff --git a/Assets/servo.webp b/Assets/servo.webp
new file mode 100644
index 0000000..f71de31
Binary files /dev/null and b/Assets/servo.webp differ
diff --git a/Assets/x44.webp b/Assets/x44.webp
new file mode 100644
index 0000000..39ce5d1
Binary files /dev/null and b/Assets/x44.webp differ
diff --git a/Docs/2_Architecture/2.2_ElectronicsCrashCourse.md b/Docs/2_Architecture/2.2_ElectronicsCrashCourse.md
index 9a6e575..a634819 100644
--- a/Docs/2_Architecture/2.2_ElectronicsCrashCourse.md
+++ b/Docs/2_Architecture/2.2_ElectronicsCrashCourse.md
@@ -4,14 +4,14 @@
### Electrical system diagram
-
+
This diagram shows many of the electrical components found on a typical FRC robot.
You don't need to memorize this layout, but it's handy to have as a reference.
### Battery
-
+
All of the power our robot uses comes from a single 12-volt car battery.
These can be found on our battery cart in the corner of the lab.
@@ -38,7 +38,7 @@ $V = IR$ is the equation which governs voltage sag, where $V$ is amount the volt
### Main Breaker
-
+
The main breaker is the power switch of the robot.
The red button will turn the robot off, and a small black lever on the side of the breaker will turn it on.
@@ -53,7 +53,7 @@ We tend to have a 3d-printed guard over the off switch to prevent accidental pre
### Power Distribution Hub (PDH)
-
+
The PDH takes the power from the battery and distributes it to the motors, sensors, and other components on the robot (Almost like it's a hub for distributing power!).
We don't have to do anything with the PDH in code, but if a motor or sensor does not have power it could have a bad connection with the PDH.
@@ -69,7 +69,7 @@ The PDH is also often at one end of our CAN network. What's CAN? Glad you asked
### The CAN Network
-
+
CAN is a type of wired communication that allows the sensors and motors on our robot to talk to each other.
In FRC, CAN is transmitted over yellow and green pairs of cables.
@@ -86,7 +86,7 @@ This app also has features to test and configure devices on the CAN network.
### CANivore
-
+
The CANivore is a device that connects to the [Rio](#roborio-2) (in the next section) over USB.
It allows us to have a second CAN network with increased bandwidth.
@@ -97,7 +97,7 @@ In 2024 we exclusively used the CANivore network for motors and sensors.
### RoboRIO 2
-
+
The RoboRIO (rio) is the brain of the robot.
It is built around a computer running Linux with a large number of Input/Output (IO) ports.
@@ -114,7 +114,7 @@ These ports include:
- The large set of pins in the middle of the Rio is the MXP port.
MXP (and the SPI port in the top-right corner) is used for communication over serial interfaces such as the I²C protocol.
Unfortunately, there is an issue with I²C that can cause the code to lock up when it is used, so we avoid using the serial ports.
- We can get around this issue by using a coprocessor (computer other than the rio, like a raspberry pi) to convert the signal to a different protocal.
+ We can get around this issue by using a coprocessor (computer other than the rio, like a raspberry pi) to convert the signal to a different protocol.
Generally we avoid using I²C devices.
- A CAN network originates at the RIO.
- Several USB ports are available on the Rio.
@@ -126,13 +126,19 @@ The Rio also has an SD card as its memory.
When we set up a Rio for the season we need to image it using a tool that comes with the driver station.
The WPILib docs have [instructions](https://docs.wpilib.org/en/stable/docs/zero-to-robot/step-3/roborio2-imaging.html) on how to image the SD card.
+### Note on 2026-7+ Control System
+As per [this](https://community.firstinspires.org/introducing-the-future-mobile-robot-controller) blog post, FRC will use a different robot control system called SystemCore beginning with the 2027 season.
+SystemCore moves away from the RoboRIO and NI, instead using a controller based on the Raspberry Pi CM5.
+As we have not been selected for beta testing at this time, we don't have a lot of information about this right now, but it's good to keep this on your radar.
+**(Leads should remember to update this part of the article in the fall of 2026!)**
+
### Vivid Hosting Radio
-
+
-The radio is essentially a Wi-Fi router that connects the robot to the driver station.
+The VH-109 radio is essentially a Wi-Fi router that lives on the robot and connects it to the driver station.
At tournaments we have to take the radio to get reprogrammed for that competition.
-This makes it able to connect to the field so that our robot gets enabled and disabled at the right times during matches, however it prevents us from connecting to the robot with a laptop wirelessly$`^1`$.
+This makes it able to connect to the access point (another radio on the field) so that our robot gets enabled and disabled at the right times during matches; however, it prevents us from connecting to the robot with a laptop wirelessly $`^1`$.
The radio has four ethernet ports and a pair of terminals for power wires.
One ethernet port connects to the rio and is labeled RIO.
One is usually reserved for tethering to a laptop and is labeled DS.
@@ -143,30 +149,22 @@ Network switches and other ethernet devices are plugged into the AUX1 and AUX2 p
After each competition we have to reimage the radio to allow it to connect to a laptop wirelessly again.
Refer to the [vivid hosting radio documentation](https://frc-radio.vivid-hosting.net/) for more information.
-The radio can also be connected to via a second radio acting as an access point.
-At time of writing we have not tried this, and best practices are still being figured out.
+We can also connect to the robot's VH-109 indirectly: a second radio pairs with the robot's radio and broadcasts a network that our laptop can join.
+That second radio could be the VH-113 access point, a larger radio designed for the field rather than for a robot, or a VH-109 configured to act like a VH-113.
+This is the setup we'll usually be using at the Loom.
The radio can either be powered using Power-Over-Ethernet (PoE) or the power terminals.
-This radio model is new at time of writing$`^2`$, and best practices are still being figured out.
Be careful to check for good, consistent radio power if you are having connection issues.
-Footnote $1$
-
-At Chezy Champs 2023 we tested a beta version of this radio, and were able to connect to it wirelessly in the pit on a second network.
-This did have stability issues.
-It is unknown if and when this capability will be re-enabled.
-
-Footnote $2$
-
An older radio known as the OM5P was in use until champs 2024, and you may encounter some on old/offseason robots.
It was much worse to deal with (more fragile and finicky) and we are lucky to be done with it.
It is pictured below.
-
+
### Motor Controllers
-
+
Motors are the primary form of movement on the robot, but on their own they are just expensive paperweights.
Motor controllers take signals from our code, often over CAN, and turn them into the voltage sent to the motor.
@@ -177,30 +175,35 @@ For instance they might be able to run PID loops much faster than our code is ab
Knowing what motor controller you are using and what features it has is very important when writing robot code.
Pictured above is the Spark Max built by REV Robotics, a common modern motor controller often used with the NEO and NEO 550 motors.
REV also produces a motor called the NEO Vortex which uses the Spark Flex, pictured below.
-
+
+
However, we avoid using REV motors due to poor software and historical mechanical issues.
Instead we use . . .
-### The Talon FX + Kraken X60
+### The Talon FX + Kraken X60/44
-
+
The Kraken X60 ("kraken") motor is the primary motor we use on the robot.
Unlike many other motors, krakens come with a built in motor controller called the Talon FX.
The kraken also has a built in encoder, or sensor that tracks the rotation and speed of the motor.
Documentation for the software to control falcons can be found [here](https://pro.docs.ctr-electronics.com/en/stable/).
+There is also the Kraken X44, which is mostly the same as the X60 but a bit smaller and lighter.
+It looks the same in code to us, since it also has a TalonFX controller.
+
+
We also use the Falcon 500 ("falcon") motor in some places on the robot.
Slightly less powerful, slightly older, and likely out of stock for the foreseeable future, falcons are slowly being phased out of our motor stock.
Because falcons also have an integrated TalonFX, they behave exactly the same in code as krakens.
A falcon is pictured below.
-
+
### Solenoids and Pneumatics
-
+
Pneumatics are useful for simple, linear, and repeatable motions where motors might not be ideal.
In the past we have used them primarily for extending intakes.
@@ -210,9 +213,17 @@ We usually use double solenoids, which have both powered extension and retractio
Single solenoids only supply air for extension, and rely on the piston having a spring or something similar to retract them.
Our mechanical team has moved somewhat away from pneumatics over weight and complexity concerns, but they may still appear on future robots.
+### Servos
+
+
+
+Servos are similar to motors in that they can produce rotational motion, but are designed for very precise rotation.
+Servos are somewhat more common in FTC than FRC, but are good for "single use" mechanisms or something that needs to go to a specific position.
+In 2025, we used two servos to open the funnel hatch panel in order to climb.
+
### Robot Status Light
-
+
The Robot Status Light (RSL) is an orange light we are required to have on our robot by competition rules.
When the robot is powered on it will glow solid orange.
@@ -226,8 +237,8 @@ We have additional LEDs on the robot that indicate the state of the robot, but t
### Encoders
-
-
+
+
Encoders are devices that measure rotation.
We use them to measure the angles of our swerve modules, the speed of our wheels, the position of our elevator, and much more.
@@ -251,7 +262,7 @@ Some teams have seen success getting around the absolute encoder only having one
### Limit Switches
-
+
A limit switch is a simple sensor that tells us whether a switch is pressed or not.
They are one of the electrically simplest sensors we can use, and are quite useful as a way to reset the encoder on a mechanism ("Zero" the mechanism).
@@ -268,21 +279,34 @@ This tells us that we are at the hardstop in the same way that a limit switch wo
### IMU
-
+
An Inertial Measurement Unit (IMU or sometimes Gyro) is a sensor that measures its acceleration and rotation.
We primarily use them to measure the rotation of the robot (heading) so that the robot knows which way it is pointing on the field.
This lets us run field-relative swerve drive.
Pictured above is the Pigeon 2.0 IMU by CTRE, an IMU that connects over the CAN network.
+### Beambreaks/Light Sensors
+
+
+
+A beambreak has two parts: an emitter, which sends out a beam of infrared light, and a receiver, which is directly across from the emitter and receives that light.
+These are useful for when we need to detect if something like a game piece is present or not.
+If we mount the two parts on opposite sides of a mechanism, the game piece will block the light from reaching the other side and thus tell us that it's there.
+
+We might use an IR reflective sensor if we don't want to or can't add a separate receiver; it combines the emitter and receiver in one unit and detects a nearby object by the light reflecting off of it.
+
### Cameras and Vision
-
+
Cameras and vision systems are an important part of advanced FRC software.
-Vision in FRC can be used to detect visual markers called apriltags on the field, game pieces, and even other robots.
-The pictured camera is a Limelight camera, a purchaseable vision solution that we used from 2021-2023.
-Limelights connect to the robot over ethernet.
-However they are generally not built for apriltag detection and pose estimation, which has pushed us towards other solutions.
-These generally involve more custom hardware, such as [arducam cameras](https://www.arducam.com/product/arducam-100fps-global-shutter-usb-camera-board-1mp-720p-ov9281-uvc-webcam-module-with-low-distortion-m12-lens-without-microphones-for-computer-laptop-android-device-and-raspberry-pi/), [orange pi processors](http://www.orangepi.org/), and [PhotonVision software](https://photonvision.org/).
-We used that hardware and software to success in 2024 for pose estimation with apriltags.
+Vision in FRC can be used to detect visual markers called AprilTags on the field, game pieces, and even other robots.
+The pictured camera is a Limelight, a purchasable vision solution that connects to the robot over ethernet and that we used from 2021-2023.
+
+However, we've moved towards custom hardware, such as [Arducam cameras](https://www.arducam.com/product/arducam-100fps-global-shutter-usb-camera-board-1mp-720p-ov9281-uvc-webcam-module-with-low-distortion-m12-lens-without-microphones-for-computer-laptop-android-device-and-raspberry-pi/) and [Orange Pi processors](http://www.orangepi.org/).
+See the [Vision](../3_Specifics/3.5_Vision.md) article for more details.
+The cameras plug into the USB ports on the Orange Pi.
+The Orange Pi connects to the radio over Ethernet.
+It's powered off the PDH through a buck (step-down) converter, which trades the 12 V of the bus for the 5 V the Orange Pi needs (with correspondingly more current available).
+If you're having issues, check the PhotonVision docs pages on [networking](https://docs.photonvision.org/en/latest/docs/quick-start/networking.html) and [wiring](https://docs.photonvision.org/en/latest/docs/quick-start/wiring.html).
\ No newline at end of file
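To make the "decreases voltage, increases current" point concrete, here is a minimal sketch of the power balance in a buck converter. All of the numbers below, including the 90% efficiency, are illustrative assumptions, not measurements of our hardware.

```java
/** Sketch of why a buck converter trades voltage for current: power in roughly
 *  equals power out (divided by efficiency). Values are illustrative only. */
public class BuckConverterSketch {
    /** Input current drawn from the bus for a given load, at some efficiency. */
    static double inputCurrentAmps(double outVolts, double outAmps, double inVolts, double efficiency) {
        // Power out = outVolts * outAmps; power in = inVolts * inputCurrent * efficiency.
        return (outVolts * outAmps) / (inVolts * efficiency);
    }

    public static void main(String[] args) {
        // A 5 V, 3 A load (15 W) draws only about 1.4 A from the 12 V bus.
        System.out.println(inputCurrentAmps(5.0, 3.0, 12.0, 0.9));
    }
}
```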
diff --git a/Docs/2_Architecture/2.3_CommandBased.md b/Docs/2_Architecture/2.3_CommandBased.md
index 484a7c4..93779f7 100644
--- a/Docs/2_Architecture/2.3_CommandBased.md
+++ b/Docs/2_Architecture/2.3_CommandBased.md
@@ -4,17 +4,17 @@
Command based programming revolves around three concepts: **Subsystems**, **Commands**, and **Triggers**.
-A Subsystem is a set of hardware that forms one system on our robot, like the drivetrain, elevator, arm, or intake.
-Each subsystem contains some associated hardware (motors, pistons, sensors, etc.) They are the "nouns" of our robot, what it is.
-Each Subsystem is generally made to contain a broad set of hardware that will always operate as a unit.
+### Subsystems
+A Subsystem represents a system on our robot, like the drivetrain, elevator, arm, or intake, that will always operate as a unit.
+Each subsystem contains some associated hardware (motors, pistons, sensors, etc).
+They are the "nouns" of our robot: what it is.
+### Commands
Commands are the "verbs" of the robot, or what our robot does.
Each Subsystem can be used by one Command at the same time, but Commands may use many Subsystems.
-Commands can be composed together, so the `LineUp`, `Extend`, and `Outake` Commands might be put together to make a `Score` Command.
+Commands can be composed together, so the `LineUp`, `Extend`, and `Outtake` Commands might be put together to make a `Score` Command.
Because each Subsystem can only be used by one Command at once, we are safe from multiple pieces of code trying to command the same motor to different speeds, for example.
-Subsystems are ways to organize resources that can be used by one Command at a time.
-
Some hardware might not be stored in a Subsystem if multiple things can/should use it at the same time safely.
For example, a vision setup can be read from by many things, and might not need to be locked by Commands.
Therefore, it might not be stored in a Subsystem.
@@ -22,29 +22,28 @@ Therefore, it might not be stored in a Subsystem.
On the other hand, a roller that is driven by a motor can only go at one speed at a time.
Therefore, we would wrap it in a Subsystem so that only one Command can use it at once.
+### Triggers
A Trigger is something which can start a Command.
The classic form of this is a button on the driver's controller.
-Another common type is one which checks for when the robot enables.
-One non-obvious Trigger we used in 2024 was one which checked when we had detected a game piece in the robot, which we used to flash our LEDs and vibrate the driver controller.
-Triggers can be made of any function that returns a boolean which makes them very powerful.
+Another common type is one which checks if the robot is enabled.
+One non-obvious Trigger we used in 2024 checked whether we had detected a game piece in the robot; it triggered Commands to flash our LEDs and vibrate the driver controller.
+Triggers can be made of any function that returns a boolean, which makes them very powerful, and can be composed together with boolean operators.
+Triggers can also be bound to a certain activation event.
+For example, we might want something to happen while a condition is true (`whileTrue()`), when a condition changes from false to true (`onTrue()`), or only once a condition has held for some amount of time (`debounce()`).
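As a rough illustration of the idea that Triggers are just composable boolean functions, here is a minimal plain-Java sketch. Real robot code uses WPILib's `Trigger` class and its bindings; the names and structure below are invented for illustration.

```java
import java.util.function.BooleanSupplier;

/** Sketch: a Trigger is just a boolean-valued function we can compose. */
public class TriggerSketch {
    // Combine two conditions with logical AND, like Trigger.and().
    static BooleanSupplier and(BooleanSupplier a, BooleanSupplier b) {
        return () -> a.getAsBoolean() && b.getAsBoolean();
    }

    // Fire only on a false -> true edge, like onTrue().
    static BooleanSupplier risingEdge(BooleanSupplier source) {
        boolean[] last = {false};
        return () -> {
            boolean now = source.getAsBoolean();
            boolean fired = now && !last[0];
            last[0] = now;
            return fired;
        };
    }

    public static void main(String[] args) {
        boolean[] enabled = {false};
        boolean[] hasGamePiece = {false};
        BooleanSupplier flashLeds = and(() -> enabled[0], risingEdge(() -> hasGamePiece[0]));

        System.out.println(flashLeds.getAsBoolean()); // false: nothing is true yet
        enabled[0] = true;
        hasGamePiece[0] = true;
        System.out.println(flashLeds.getAsBoolean()); // true: rising edge while enabled
        System.out.println(flashLeds.getAsBoolean()); // false: no new edge
    }
}
```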
+
Some large Commands are better represented by several Commands and some Triggers!
-# update with superstructure stuff later
+Subsystems, Commands, and Triggers are the building blocks of the robot's overall "superstructure".
+This will be covered in more detail in the [Superstructure article.](2.10_Superstructure.md)
### Resources
-- [WPILib intro to functional programming](https://docs.wpilib.org/en/stable/docs/software/basic-programming/functions-as-data.html).
- Read through this article on lambda expressions and functional programming if you haven't already.
-- [WPILib docs](https://docs.wpilib.org/en/stable/docs/software/commandbased/index.html).
- Read through these docs until you finish "Organizing Command-Based Robot Projects"
+- Read through [this article](https://docs.wpilib.org/en/stable/docs/software/basic-programming/functions-as-data.html) on lambda expressions and functional programming if you haven't already.
+- Read through [these docs](https://docs.wpilib.org/en/stable/docs/software/commandbased/index.html) until you finish "Organizing Command-Based Robot Projects"
OR watch [this video](https://drive.google.com/file/d/1ykFDfXVYk27aHlXYKTAqtj1U2T80Szdj/view?usp=drive_link).
- Presentation notes for the video are [here](CommandBasedPresentationNotes.md).
- If you watch the video, it is recommended to also read the [Subsystems](https://docs.wpilib.org/en/stable/docs/software/commandbased/subsystems.html), [Binding Commands to Triggers](https://docs.wpilib.org/en/stable/docs/software/commandbased/binding-commands-to-triggers.html), and [Organizing Command-Based Robot Projects](https://docs.wpilib.org/en/stable/docs/software/commandbased/organizing-command-based.html#) for addition details on using Command-Based.
-
- The important segment of all of this to remember is:
- > Commands represent actions the robot can take. Commands run when scheduled, until they are interrupted or their end condition is met. Commands are very recursively composable: commands can be composed to accomplish more-complicated tasks. See Commands for more info.
- >
- > Subsystems represent independently-controlled collections of robot hardware (such as motor controllers, sensors, pneumatic actuators, etc.) that operate together. Subsystems back the resource-management system of command-based: only one command can use a given subsystem at the same time. Subsystems allow users to “hide” the internal complexity of their actual hardware from the rest of their code - this both simplifies the rest of the robot code, and allows changes to the internal details of a subsystem’s hardware without also changing the rest of the robot code.
+ Presentation notes for the video are [here](2.4_CommandBasedPresentationNotes.md).
+
+- If you watch the video, it is recommended to also read the [Subsystems](https://docs.wpilib.org/en/stable/docs/software/commandbased/subsystems.html), [Binding Commands to Triggers](https://docs.wpilib.org/en/stable/docs/software/commandbased/binding-commands-to-triggers.html), and [Organizing Command-Based Robot Projects](https://docs.wpilib.org/en/stable/docs/software/commandbased/organizing-command-based.html#) for additional details on using Command-Based.
### Examples
@@ -53,12 +52,12 @@ Some large Commands are better represented by several Commands and some Triggers
### Exercises
-- Make basic KitBot code using the Command-Based skeleton. You can follow [this](KitbotExampleWalkthrough.md) tutorial.
+- Make basic KitBot code using the Command-Based skeleton. You can follow [this](2.5_KitbotExampleWalkthrough.md) tutorial.
### Notes
- We prefer making simple Commands with Command factories, or methods in a subsystem that return a Command.
These methods should be simple interactions like `setTargetExtensionMeters()` or `extendIntake()`.
- Then you can use decorators as described [here](https://docs.wpilib.org/en/stable/docs/software/commandbased/command-compositions.html) to compose the basic Commands into more complex sequences.
- Generally we make these compositions in `Robot` and `Superstructure` but you can also make single-Subsystem compositions within that Subsystem.
+ Then, you can use decorators as described [here](https://docs.wpilib.org/en/stable/docs/software/commandbased/command-compositions.html) to compose the basic Commands into more complex sequences.
+ Generally, we make these compositions in `Robot` and `Superstructure`, but you can also make single-Subsystem compositions within that Subsystem.
See our code from previous years for examples of this pattern, or talk to a software lead.
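As a sketch of the command-factory pattern described above, here is a minimal plain-Java version. The `Command` interface here is a stub standing in for WPILib's `Command`, and the subsystem name and voltages are invented.

```java
/** Sketch of the command-factory pattern: a subsystem exposes small, named
 *  methods that each return a Command. The Command interface is a stub
 *  standing in for WPILib's Command; names and voltages are invented. */
public class RollerSubsystemSketch {
    interface Command { void execute(); }

    private double appliedVolts = 0.0;

    // Command factories: simple interactions, composed elsewhere into sequences.
    public Command runIntake() { return () -> appliedVolts = 6.0; }
    public Command stop() { return () -> appliedVolts = 0.0; }

    public double getAppliedVolts() { return appliedVolts; }
}
```

On a real robot these factories would be built with WPILib helpers like `this.run(...)`, and decorators like `andThen()` would chain them into larger compositions.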
diff --git a/Docs/2_Architecture/2.6_AdvantageKit.md b/Docs/2_Architecture/2.6_AdvantageKit.md
index 70a16a4..34e8803 100644
--- a/Docs/2_Architecture/2.6_AdvantageKit.md
+++ b/Docs/2_Architecture/2.6_AdvantageKit.md
@@ -4,23 +4,29 @@
### What is logging?
-Logging is recording some or all of the state (think the current values of variables, inputs and outputs, and what Commands are running,) of the robot so that we can play it back later.
+Logging is recording some or all of the state (such as the current values of variables, inputs and outputs, and what Commands are running) of the robot so that we can play it back later.
### Why log?
-Logging helps with debugging by letting us see exactly what the robot was doing when it broke.
-This is especially useful at competition when we have limited time and testing ability to diagnose problems.
+Logging helps with debugging by letting us see exactly what was happening to the robot and what it was doing when it broke.
+This is especially useful at competition, when we have limited time and testing ability to diagnose problems.
For instance, at Houston 2023 we had an issue in our second quals match where our grabber stopped responding to input.
Using the logs of that match, we saw that the sensor readings of the grabber had stopped responding, which suggested that the CAN bus to the grabber had broken.
### Why AdvantageKit?
-AdvantageKit logs every input and output to and from the robot, so that we can perfectly recreate the state of the robot in the log or with a simulator.
-Logging everything means we never have to say "if only we had logged one more thing!" It also means that we can simulate how the code might have responded differently with changes.
-One of the examples 6328 uses is when they adjusted the way they tracked vision targets based on a match log that revealed a problem, then used the log replay to confirm that the change fixed the vision issue.
-AdvantageKit is a mature and developed framework for this type of logging, and has been used on Einstein by 6328 and 3476.
-The framework should continue to be maintained for the forseeable future and by using the framework instead of a custom solution we reduce our overhead for using a system like this.
-AdvantageKit is closely integrated with AdvantageScope, a log and sim viewer built by 6328
+AdvantageKit logs every input and output to and from the robot.
+This means we can perfectly recreate the state of the robot in the log or with a simulator.
+
+
+
+It also means that we can run the same inputs through modified code, and simulate how the robot would have responded.
+AdvantageKit calls this replay.
+One example 6328 gives: they used a log to diagnose an issue with the way they tracked vision targets, adjusted the code, then used replay to confirm that the fix produced the correct outputs from the same inputs.
+
+AdvantageKit is a mature and developed framework for this type of logging that should continue to be maintained for the foreseeable future.
+
+AdvantageKit is closely integrated with AdvantageScope, a log and sim viewer built by 6328.
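A minimal sketch of why logging every input makes replay possible: if the control logic is a pure function of its logged inputs, re-running it off-robot reproduces the outputs exactly. AdvantageKit's real IO-layer machinery is far more involved; this only shows the core idea, with made-up values.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of log replay: logged inputs + deterministic logic = identical outputs. */
public class ReplaySketch {
    // "Control logic": decide an output voltage from a sensor reading.
    static double control(double sensor) { return sensor > 0.5 ? 12.0 : 0.0; }

    public static void main(String[] args) {
        // On-robot: every input is logged as it arrives.
        List<Double> loggedInputs = List.of(0.1, 0.7, 0.4, 0.9);
        List<Double> liveOutputs = new ArrayList<>();
        for (double in : loggedInputs) liveOutputs.add(control(in));

        // Off-robot replay: the same inputs through the same logic give the same outputs.
        List<Double> replayedOutputs = new ArrayList<>();
        for (double in : loggedInputs) replayedOutputs.add(control(in));
        System.out.println(liveOutputs.equals(replayedOutputs)); // true
    }
}
```

Swapping in a modified `control` function during replay is exactly the "what would this change have done in that match?" workflow described above.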
### Drawbacks
@@ -28,12 +34,12 @@ Running this amount of logging has performance overhead on the rio, using valuab
Logging also requires a non-insignificant architecture change to our codebase by using an IO layer under each of our subsystems.
While this does require some additional effort to write subsystems, it also makes simulating subsystems easier so it is a worthwhile tradeoff.
-We have not yet done significant on-robot r&d with AdvantageKit and need to assess the performance impacts of using it.
+8033-specific usage of AdvantageKit features will be covered in more detail in the next couple of lessons.
### Resources
+- [AdvantageKit docs](https://docs.advantagekit.org/)
- [AdvantageKit repository](https://github.com/Mechanical-Advantage/AdvantageKit)
- - See the README for this repo for docs
- [AdvantageScope log viewer](https://github.com/Mechanical-Advantage/AdvantageScope)
- [6328 logging talk](https://youtu.be/mmNJjKJG8mw)
@@ -41,14 +47,15 @@ We have not yet done significant on-robot r&d with AdvantageKit and need to asse
- [6328 2023 code](https://github.com/Mechanical-Advantage/RobotCode2023)
- [3476 2023 code](https://github.com/FRC3476/FRC-2023)
-- [8033 2023 AdvantageKit port](https://github.com/HighlanderRobotics/Charged-Up/tree/advantagekit) [LINK DOWN]
+- [8033 2025 code](https://github.com/HighlanderRobotics/Reefscape)
### Exercises
-- Install AdvantageKit into your kitbot project following [this tutorial](https://github.com/Mechanical-Advantage/AdvantageKit/blob/45d8067b336c7693e63ee01cdeff0e5ddf50b92d/docs/INSTALLATION.md).
- You do not need to modify the subsystem file yet, we will do that as part of the simulation tutorial.
+- Install AdvantageKit into your kitbot project following [this tutorial](https://docs.advantagekit.org/getting-started/installation/existing-projects).
+ You do not need to add the suggested block in the `Robot()` constructor.
+ We will do that as part of the simulation tutorial.
### Notes
-- See also the [Simulation](Simulation.md) article for more on the IO layer structure
-- _When we have log files, put a link to one here as an example_
+- See the [AdvantageKit Structure Reference](2.7_AKitStructureReference.md) article for more on the IO layer structure
+- [Here](https://drive.google.com/drive/folders/1qNMZ7aYOGI31dQNAwt7rhFo97NR7mxtr) are some logs from our 2023-2024 season
diff --git a/Docs/2_Architecture/2.9_KitbotExampleWalkthroughSim.md b/Docs/2_Architecture/2.9_KitbotExampleWalkthroughSim.md
index 190740d..a6667e4 100644
--- a/Docs/2_Architecture/2.9_KitbotExampleWalkthroughSim.md
+++ b/Docs/2_Architecture/2.9_KitbotExampleWalkthroughSim.md
@@ -18,7 +18,7 @@ The first step of moving our standard Command-based code to a loggable, simulate
Luckily AdvantageKit has a handy guide on how to install it on an existing code base.
Follow [this walkthrough](https://github.com/Mechanical-Advantage/AdvantageKit/blob/main/docs/INSTALLATION.md).
Follow all the steps in the doc through adding the `@AutoLog` annotation.
-You do not need to add the suggested block in `robotInit()`, instead use the one below.
+You do not need to add the suggested block in `Robot()`, instead use the one below.
```Java
Logger.getInstance().recordMetadata("ProjectName", "KitbotExample"); // Set a metadata value
diff --git a/Docs/3_Specifics/3.1_ControlsIntro.md b/Docs/3_Specifics/3.1_ControlsIntro.md
index 46034f5..747efbf 100644
--- a/Docs/3_Specifics/3.1_ControlsIntro.md
+++ b/Docs/3_Specifics/3.1_ControlsIntro.md
@@ -16,21 +16,21 @@ Choosing a control strategy is an important takeaway from these articles.
Generally we have a pretty direct tradeoff between time/effort spent on controls and precision in our output (as well as how fast the mechanism gets to that precise output!)
Loosely speaking, there are 3 'levels' of control that we tend to use for our mechanisms.
-1. Is simple feedforward control.
+### 1. Simple feedforward control.
Really, this is only feedforward in the loosest sense, where we set a DutyCycle or Voltage to the motor.
This sort of control doesn't set the exact speed of the output, and is really only useful on mechanisms that just need some rotation to work, but not anything precise.
You'll usually see this on intakes or routing wheels where we just want to pull a game piece into or through the robot.
- The [Intake Subsystem](https://github.com/HighlanderRobotics/Charged-Up/blob/main/src/main/java/frc/robot/subsystems/IntakeSubsystem.java) from 2023 is an example of this.
+	The [Roller Subsystem](https://github.com/HighlanderRobotics/Reefscape/blob/83a11f1bb1dfd20c04c3dc6a0e548773f11dfc58/src/main/java/frc/robot/subsystems/roller/RollerSubsystem.java#L45) and [RollerIOReal class](https://github.com/HighlanderRobotics/Reefscape/blob/83a11f1bb1dfd20c04c3dc6a0e548773f11dfc58/src/main/java/frc/robot/subsystems/roller/RollerIOReal.java#L99) from 2025 are examples of this.
-2. Is simple feedback control.
+### 2. Simple feedback control.
	We tend to use this on mechanisms where we want sensible units (like rotations per minute, or degrees) instead of an arbitrary output, but that either don't need huge amounts of precision or are so overpowered compared to the forces on them that we can ignore outside forces.
Often we use this by calling a TalonFX's Position or Velocity control modes.
- [The angle motor on a swerve module](https://github.com/HighlanderRobotics/Charged-Up/blob/main/src/main/java/frc/robot/SwerveModule.java) is an example of this sort of control, and works because the Falcon 500 powering the module angle is so much stronger than the friction of the wheel with the ground.
+ [The angle motor on a swerve module](https://github.com/HighlanderRobotics/Reefscape/blob/83a11f1bb1dfd20c04c3dc6a0e548773f11dfc58/src/main/java/frc/robot/subsystems/swerve/ModuleIOReal.java#L203) is an example of this sort of control, and works because the Kraken X60 powering the module angle is so much stronger than the friction of the wheel with the ground.
-3. Is combined feedforward and feedback control.
+### 3. Combined feedforward and feedback control.
This is ideal for most situations where we desire precise control of a mechanism, and should be used on all primary mechanisms of a robot.
It tends to require more effort to tune and model than the previous levels, but is the correct way to control mechanisms.
- The [Elevator Subsystem](https://github.com/HighlanderRobotics/Charged-Up/blob/main/src/main/java/frc/robot/subsystems/ElevatorSubsystem.java) from 2023 is an example of this, specifically the `updatePID()` method which adds the results of a PID controller calculation with a feedforward controller calculation.
+ The [Elevator Subsystem](https://github.com/HighlanderRobotics/Charged-Up/blob/4475dfc7e07efa000e87597dddaac1e75c28a29c/src/main/java/frc/robot/subsystems/Elevator/ElevatorSubsystem.java#L49) from 2023 is an example of this, specifically the `updatePID()` method which adds the results of a PID controller calculation with a feedforward controller calculation.
Notice the use of a WPILib feedforward class here.
WPILib provides several classes that model common mechanisms.
[This article](https://docs.wpilib.org/en/stable/docs/software/advanced-controls/controllers/feedforward.html#feedforward-control-in-wpilib) goes over the classes in more detail.
diff --git a/Docs/3_Specifics/3.2_MotionProfiling.md b/Docs/3_Specifics/3.2_MotionProfiling.md
index c95c00f..7606e12 100644
--- a/Docs/3_Specifics/3.2_MotionProfiling.md
+++ b/Docs/3_Specifics/3.2_MotionProfiling.md
@@ -2,16 +2,17 @@
## Motion Profiles are a way to smoothly control a mechanism by combining PID and FF control
-As the [feedforward](Feedforward.md) article covered, it is often beneficial to combine feedforward and feedback (for our use, usually PID) control.
-One problem with this is that a feedforward for a dc motor (such as a falcon 500) controls _velocity_ while a mechanism might require _position_ control.
+As the [Controls](3.1_ControlsIntro.md) article covered, it is often beneficial to combine feedforward and feedback (for our use, usually PID) control.
+When we want to control the _position_ of a mechanism, however, we're limited by the fact that feedforward for a DC motor (such as a Kraken X60) controls _velocity_.
+
A motion profile smoothly interpolates, or transitions, between a starting position and a setpoint position.
-There are a variety of types of motion profiles, but the one we use most often is called a _trapezoidal motion profile_.
-When using a trapezoidal motion profile, a mechanism will attempt to accelerate at a constant rate to some cruising speed, then deaccelerate to a standstill at the setpoint position.
+There are various types of motion profiles, but the one we use most often is called a _trapezoidal motion profile_.
+

This graph shows a motion profile over time, although the specific values aren't important.
The blue line is the position of the controller, the black line is the velocity of the controller, and the red line is the acceleration.
-You can see the 3 phases of the profile, with an accelerating phase at the start, a cruising phase through most of the profile, and a deaccelerating phase at the end.
+You can see the 3 phases of the profile, with an accelerating phase at the start (the first leg of the trapezoid), a cruising phase through most of the profile (the top base), and a decelerating phase at the end (the second leg).
The position line smoothly starts and stops, resulting in clean movement of the mechanism which minimizes wasted effort.
The real advantage of this is that now we have both a position and velocity setpoint at any given time, which means that the PID controller can adjust for disturbances in position while the feedforward controller can provide the majority of the control effort to get to the setpoint.
@@ -33,4 +34,4 @@ The real advantage of this is that now we have both a position and velocity setp
- Motion profiling is primarily used for position-controlled mechanisms like elevators and arms.
It can also be used on velocity-controlled mechanisms, although it provides less of a benefit.
-- The `motion magic` control mode for a talon fx is really just running motion profiling on the motor controller.
+- The `MotionMagic` control mode for a TalonFX is really just running motion profiling on the motor controller.
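
The three phases described above can be sketched directly as code. This is a plain-Java illustration of sampling a trapezoidal profile, not WPILib's `TrapezoidProfile` or CTRE's Motion Magic; it assumes the move is long enough to actually reach cruise speed (it doesn't handle short, triangular profiles), and all the limits are made-up example numbers.

```java
// Sketch of sampling a trapezoidal motion profile: accelerate at
// maxAccel up to maxVel, cruise, then decelerate to rest at `distance`.
// Assumes the move is long enough to reach cruise speed.
public class TrapezoidSketch {
    // Returns {position, velocity} at time t along the profile.
    static double[] sample(double maxVel, double maxAccel, double distance, double t) {
        double tAccel = maxVel / maxAccel;                // time to reach cruise speed
        double dAccel = 0.5 * maxAccel * tAccel * tAccel; // distance covered while accelerating
        double tCruise = (distance - 2 * dAccel) / maxVel;
        double tTotal = 2 * tAccel + tCruise;

        if (t < tAccel) {                     // accelerating leg of the trapezoid
            return new double[] {0.5 * maxAccel * t * t, maxAccel * t};
        } else if (t < tAccel + tCruise) {    // cruising along the top base
            return new double[] {dAccel + maxVel * (t - tAccel), maxVel};
        } else if (t < tTotal) {              // decelerating leg, mirrored math
            double td = tTotal - t;           // time remaining in the profile
            return new double[] {distance - 0.5 * maxAccel * td * td, maxAccel * td};
        }
        return new double[] {distance, 0.0};  // profile finished, hold the setpoint
    }

    public static void main(String[] args) {
        // Hypothetical elevator: 2 m/s cruise, 4 m/s^2 accel, 3 m move.
        double[] state = sample(2.0, 4.0, 3.0, 1.0);
        System.out.println(state[0] + " m, " + state[1] + " m/s"); // mid-cruise sample
    }
}
```

Each sampled `{position, velocity}` pair is what gets handed to the PID and feedforward controllers every loop, which is where the "position and velocity setpoint at any given time" advantage comes from.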
diff --git a/Docs/3_Specifics/3.4_Choreo.md b/Docs/3_Specifics/3.4_Choreo.md
index 0dfb8d7..92437e7 100644
--- a/Docs/3_Specifics/3.4_Choreo.md
+++ b/Docs/3_Specifics/3.4_Choreo.md
@@ -84,7 +84,7 @@ They'll generally look like this:
}
```
Choreolib relies heavily on `Triggers`, which will not be discussed here.
-Refer to the [WPILib docs](https://docs.wpilib.org/en/stable/docs/software/commandbased/binding-commands-to-triggers.html) for more *or potentially another article here?*
+Refer to the [WPILib docs](https://docs.wpilib.org/en/stable/docs/software/commandbased/binding-commands-to-triggers.html) for more, or the [Command Based article](../2_Architecture/2.3_CommandBased.md#Triggers).
You'll notice one of the parameters above is the `choreoDriveController()` method.
This is in `SwerveSubsystem`.
diff --git a/Docs/3_Specifics/3.5_Vision.md b/Docs/3_Specifics/3.5_Vision.md
index d420ded..ce24c5f 100644
--- a/Docs/3_Specifics/3.5_Vision.md
+++ b/Docs/3_Specifics/3.5_Vision.md
@@ -128,4 +128,4 @@ Some test data of other frequently used cameras is linked below in the Resources
### Examples
- [Photonlib examples](https://github.com/PhotonVision/photonvision/tree/master/photonlib-java-examples)
-- [8033 2024 implementation](https://github.com/HighlanderRobotics/Crescendo/tree/main/src/main/java/frc/robot/subsystems/vision)
\ No newline at end of file
+- ***update with fall 2025 refactor***
\ No newline at end of file