<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://skyfighter64.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://skyfighter64.github.io/" rel="alternate" type="text/html" /><updated>2026-03-20T13:38:28+00:00</updated><id>https://skyfighter64.github.io/feed.xml</id><title type="html">Skyfighter64’s Home Page</title><subtitle>Discover updates and summaries for some of my latest projects and ideas.</subtitle><entry><title type="html">Introduction to ALUP</title><link href="https://skyfighter64.github.io/2026/03/01/Introduction-To-ALUP.html" rel="alternate" type="text/html" title="Introduction to ALUP" /><published>2026-03-01T13:53:33+00:00</published><updated>2026-03-01T13:53:33+00:00</updated><id>https://skyfighter64.github.io/2026/03/01/Introduction-To-ALUP</id><content type="html" xml:base="https://skyfighter64.github.io/2026/03/01/Introduction-To-ALUP.html"><![CDATA[<h2 id="what-its-all-about">What it’s all about</h2>
<p>Many people know traditional LED strips, where a small remote changes the color of all LEDs at once. Addressable LED strips, however, give you the possibility to change the color of every LED individually. To control the color of each LED, one often uses either some kind of controller which only offers preprogrammed patterns, or a microcontroller which needs to be programmed manually and is limited in its performance and features.</p>

<p><img src="/media/alup/led_off.JPG" alt="Addressable LEDs" /></p>

<p>But what if I want to do MORE than this?</p>

<p>What if I wanted to use the computational power, connectivity, and flexibility of a PC, laptop, or other device?</p>

<p>This is where ALUP comes in.
ALUP builds the bridge between the LEDs and a powerful computer by using a microcontroller to “translate” instructions between the computer and the LEDs.</p>

<p>More specifically, it defines how the computer and the microcontroller have to communicate in order to successfully transfer commands directly to the LEDs.</p>

<p>This makes it possible for the computer to “tell” the LED strip things like:</p>
<ul>
  <li>“I want LED 5 to be Red”</li>
  <li>“Make all LEDs go dark”</li>
  <li>“The LEDs 5, 6 and 9 should switch to green in 1 second from now”</li>
</ul>

<p>By making this kind of communication possible, a whole new world of possibilities opens up. Users can now use the power of their computer, for example, to:</p>
<ul>
  <li>Create complex music visualization with lots of audio processing</li>
  <li>Synchronize multiple LED-Strips to show the same things at exactly the same time</li>
  <li>Light up their room depending on images shown on their computer screen</li>
  <li>Create light shows based on music videos</li>
</ul>

<p>and so much more.</p>

<p><img src="/media/alup/leds_on.JPG" alt="Addressable LEDs" /></p>

<h2 id="where-to-learn-more">Where to learn more</h2>

<p>If you are interested in this project, I suggest checking out more posts here:</p>
<ul>
  <li><a href="/projects/alup">ALUP Project Page</a></li>
</ul>

<p>Also make sure to check out my <a href="https://github.com/Skyfighter64/">GitHub</a>, where all the code and documentation is hosted.</p>

<h2 id="github-projects">GitHub Projects</h2>
<ul>
  <li><a href="https://github.com/Skyfighter64/ALUP">ALUP-Protocol Definition</a> - Textual definition of the ALUP Communication Protocol itself</li>
  <li><a href="https://github.com/Skyfighter64/Python-ALUP">Python-ALUP</a> - Python Reference Implementation for an ALUP Sender</li>
  <li><a href="https://github.com/Skyfighter64/Arduino-ALUP">Arduino-ALUP</a> - Reference Implementation for an ALUP Receiver using the Arduino Framework</li>
  <li><a href="https://github.com/Skyfighter64/ALUP-Controller">ALUP-Controller</a> - Feature-Rich Command Line tool for interacting with an ALUP-Receiver</li>
  <li><a href="https://github.com/Skyfighter64/ALUP-Lightshow">ALUP-Lightshow</a> - Create timestamp-based light shows using ALUP</li>
  <li><a href="https://github.com/Skyfighter64/Audio-Visualizer">Audio-Visualizer</a> - Python-Based Audio Visualizer for Addressable LEDs on Raspberry Pi using <a href="https://github.com/karlstav/cava">CAVA</a></li>
</ul>]]></content><author><name></name></author><summary type="html"><![CDATA[What it’s all about Many people know traditional LED strips, where you can change the color of all LEDs using a small Remote. For Addressable LED strips however, you get the possibility to change the color of every LED individually. To control the color of each LED however, one often either uses some kind of Controller which only has some preprogrammed patterns, or a microcontroller which needs to be programmed manually and is limited in its performance and features.]]></summary></entry><entry><title type="html">Time Synchronization for the ALUP Protocol</title><link href="https://skyfighter64.github.io/timesync/2025/09/09/Time-Synchronization.html" rel="alternate" type="text/html" title="Time Synchronization for the ALUP Protocol" /><published>2025-09-09T22:14:49+00:00</published><updated>2025-09-09T22:14:49+00:00</updated><id>https://skyfighter64.github.io/timesync/2025/09/09/Time-Synchronization</id><content type="html" xml:base="https://skyfighter64.github.io/timesync/2025/09/09/Time-Synchronization.html"><![CDATA[<p><em>This post is part of a series about my <a href="https://github.com/Skyfighter64/ALUP">ALUP Protocol</a> side project.</em></p>

<h1 id="introduction">Introduction</h1>

<p>ALUP is an application-layer protocol used to communicate RGB data from a PC to a Microcontroller, in order to light up addressable LED strips from devices with more computing power than an Arduino.</p>

<p>Originally, it was intended for use with a single ALUP Receiver, but more recently I figured it would be useful if multiple Receivers could be grouped together to update their LEDs synchronously. More specifically, I wanted to add timestamps to each Data Frame, telling the Microcontroller when exactly to update the LEDs. This requires time synchronization between both devices.</p>

<h1 id="background">Background</h1>
<p>A computer usually synchronizes its local time with accurate time servers over the internet. This makes its internal timekeeping sufficiently accurate for everyday use. Most Microcontrollers, however, do not support this feature and are often not even connected to the internet at all. For them, time tracking is done by an internal timer which starts at 0 when they first boot up.</p>

<p>These cheap internal timers are often prone to some drift, where the internal clock counts either slightly faster or slower than the real time (usually ~ $3-4 \frac{s}{day}$).</p>

<p>In order to synchronize the time between a PC and a Microcontroller, we need to:</p>
<ul>
  <li>Calculate the offset of the Microcontroller’s clock to the real time</li>
  <li>Update the offset often to minimize the inaccuracy caused by drift</li>
  <li>Integrate the mechanism into the existing protocol</li>
  <li>Make it work efficiently</li>
</ul>

<h2 id="the-generic-precision-time-protocols-synchronization-mechanism">The generic Precision Time Protocol’s Synchronization Mechanism</h2>

<p>About half a year ago, I was first introduced to the time synchronization mechanism of the generic Precision Time Protocol at university. I was fascinated by how simple it is and how well it works once you understand the principle behind it.</p>

<figure>
  <img src="https://upload.wikimedia.org/wikipedia/commons/4/48/Grundlegender_PTP-Nachrichtenaustausch.png" alt="PTP Time Synchronization Mechanism" />
  <figcaption>The gPTP Time Synchronization Mechanism (src: 
<a href="https://de.wikipedia.org/wiki/Precision_Time_Protocol#/media/Datei:Grundlegender_PTP-Nachrichtenaustausch.png">Wikipedia: PTP (German)</a>)</figcaption>
</figure>

<p>The setup is similar to our Microcontroller situation: a Master device which has the accurate time (master time domain) and a Slave device which has its own local time (slave time domain) synchronize their clocks using time measurements.</p>

<p>How it works:</p>
<ol>
  <li>The master device sends its accurate time $T$ to the slave. Because of the unknown transmission delay (10-100 ms or more), the received time $T$ is already outdated once it reaches the slave. For this reason, the master notes the time it sent out the packet ($t_1$) and the slave notes the time it received the packet ($t_2$).</li>
  <li>The master transfers $t_1$ to the Slave in a follow up packet.</li>
  <li>The slave sends a packet to the master and notes the sending time $t_3$.</li>
  <li>The master notes the receiving time $t_4$ and responds to the delay packet by sending $t_4$ to the slave.</li>
</ol>

<p>The slave now has all time stamps $t_1, t_2, t_3, t_4$ and can calculate the transmission delay $\Delta_t$:</p>

\[\Delta_t = \frac{(t_2-t_1) + (t_4-t_3)}{2}\]

<p>With this, it can correct the previously exchanged time $T$:</p>

\[T_{corrected} = T + \Delta_t\]

<h3 id="assumptions-and-caveats">Assumptions and Caveats</h3>
<p>This method of time-correction makes some non-trivial assumptions:</p>
<ul>
  <li>There should be no clock drift on either side</li>
  <li>Transmission latency of $T$ needs to be the same as $\Delta_t$</li>
</ul>

<p>Which implies that:</p>
<ul>
  <li>Sending and receiving latencies are the same (symmetric)</li>
  <li>Sending latency and $T$’s transmission latency are the same</li>
</ul>

<p>These assumptions are the source of error in practical applications. Especially when working with wireless connections, latency spikes caused by lower-layer backoff algorithms, like CSMA/CA in WiFi, can lead to inaccuracies. For this reason, we found it to be good practice to perform multiple synchronization rounds and take the median as the final result.</p>

<p>To reduce the impact of time drift, it is furthermore recommended to repeat the synchronization algorithm as often as needed.</p>

<h3 id="some-further-notes">Some further Notes:</h3>
<p>In conclusion, all that is done is:</p>
<ol>
  <li>Transmit the accurate time $T$ from the master to the slave</li>
  <li>Measure the transmission delay and correct $T$ on the slave</li>
</ol>

<p>Simple, right?</p>

<p>Also, one might think: “If the sending and receiving latencies need to be symmetric, why do we measure both and not only the sending latency?”.</p>

<p>It’s because we actually <strong>need</strong> the measurement in both directions due to the different time domains on the master and slave before synchronization (i.e., we need to consider which clock measured which time).</p>

<p>We can’t just subtract $t_2$ and $t_1$ because $t_1$ is in the master time domain and $t_2$ is in the slave time domain (The same goes for $t_4$ and $t_3$).</p>

<p>However, subtracting $t_4$ and $t_1$ as well as $t_3$ and $t_2$ is entirely valid, because the first two were both measured by the master’s clock and the second two by the slave’s. Calculating their difference therefore works just fine and eliminates the constant offset so we can calculate the average.</p>

<p>In formulas, it therefore would make more sense to rewrite:</p>

\[\Delta_t = \frac{(t_2-t_1) + (t_4-t_3)}{2}\]

<p>as:</p>

\[\Delta_t = \frac{(t_4-t_1)-(t_3-t_2)}{2}\]

<h1 id="adaptation-for-alup">Adaptation for ALUP</h1>
<p>Since ALUP is structured very differently, we made some adaptations in order for this mechanism to work:</p>
<ol>
  <li>The protocol has only two communication steps (see <a href="https://github.com/Skyfighter64/ALUP/blob/master/Documentation/Documentation_en-us.md#data-transmission">ALUP Docs</a>); therefore, we don’t want to do the 4-step handshake from gPTP, but rather simplify it to two steps.</li>
  <li>ALUP tries to put as much work on the Sender (gPTP’s Master) as possible; therefore, we want to track both times on the Sender instead of telling the Receiver (Slave) to correct its own time.</li>
</ol>

<p>So the overall goals are:</p>

<ol>
  <li>Synchronize the Receiver time as often as possible</li>
  <li>Reduce the four steps from gPTP to only  two</li>
  <li>Make the Sender (Master) keep the times instead of the Receiver (Slave)</li>
</ol>

<!---
1. Synchronize the Receiver time with every ALUP Packet
2. Track the time offset $\delta^S_R$ for all ALUP Receivers on the sender
3. When sending a packet: Convert any timestamps (internally) from Sender's local time to the respective Receiver's local time
-->
<p>For this, we needed to turn the gPTP mechanism around so that the Sender (Master) ends up with $t_1,t_2,t_3,t_4$. This also reduces the communication effort from four steps to only two (yay :&gt;).
The new communication diagram looks something like this:</p>

<center>
<figure>
  <img src="/media/alup/time_sync.PNG" alt="Reduced time synchronization graph" width="60%" style="display: block; margin: 0 auto" />
  <figcaption>The simplified time synchronization graph</figcaption>
</figure>
</center>

<p>With:</p>

\[\Delta_t = \frac{(t_2-t_1) + (t_4-t_3)}{2}\]

<p>The delta is still calculated in the same way as before, but you may notice that we do not send $T$ over to the Sender. This is because we don’t need a specific time from the Receiver, but can rather use any other timestamp, such as $t_3$, as a replacement. This simplifies the communication further</p>

<p>from:</p>

\[T_{Receiver} = T + \Delta_t\]

<p>to:</p>

\[T_{Receiver} = t_3 + \Delta_t\]

<p>We have therefore successfully transferred the Receiver’s local time $T_{Receiver}$ to the Sender, with minimal error thanks to $\Delta_t$.</p>

<p>Since it is more practical, instead of storing $T_{Receiver}$, we rather store the offset $\delta^S_R$ between the Receiver’s time and the Sender’s time:</p>

\[\delta^S_R = T_{Receiver} - T_{Sender}\]

<p><strong>Note:</strong> For $\delta^S_R$, the $S$ and $R$ denote that this is the offset between the <strong>S</strong>ender’s time and the <strong>R</strong>eceiver’s time.</p>

<p>This way, we don’t need to track the Receiver’s time on the Sender separately, but can instead use the Sender’s system time at any point to calculate the corresponding local time on the Receiver by using:</p>

\[T_{Receiver}  = T_{Sender} + \delta^S_R\]

<p>The last simplification we do is to choose the measuring point for  $T_{Sender}$ smartly. To be more specific, we choose $T_{Sender}$ to be the same as $t_4$.</p>

<h3 id="simplification-of-the-formula">Simplification of the formula</h3>

<p>From before, we found the following formulas:</p>

<ol>
  <li>$\Delta_t = \frac{(t_2-t_1) + (t_4-t_3)}{2}$</li>
  <li>$T_{Receiver} = t_3 + \Delta_t$</li>
  <li>$\delta^S_R = T_{Receiver} - T_{Sender} $</li>
  <li>$T_{Sender} = t_4$</li>
</ol>

<p>We combine and simplify them with respect to $\delta^S_R$ in order to get a single formula which is simpler to write and easier to calculate:</p>

<ul>
  <li>Start with:</li>
</ul>

\[\delta^S_R = T_{Receiver} - T_{Sender}\]

<ul>
  <li>Put in $T_{Receiver} = t_3 + \Delta_t$:</li>
</ul>

\[\delta^S_R = (t_3 + \Delta_t) - T_{Sender}\]

<ul>
  <li>Put in  $T_{Sender} = t_4$:</li>
</ul>

\[\delta^S_R =  (t_3 + \Delta_t) - t_4\]

<ul>
  <li>Put in $\Delta_t = \frac{(t_2-t_1) + (t_4-t_3)}{2}$:</li>
</ul>

\[\delta^S_R = (t_3 + \frac{(t_2-t_1) + (t_4-t_3)}{2}) - t_4\]

<ul>
  <li>Rewrite subtractions without brackets:</li>
</ul>

\[\delta^S_R = \frac{(t_2-t_1) + (t_4-t_3)}{2} - t_4 + t_3\]

<ul>
  <li>Move $t_4, t_3$ onto the fraction:</li>
</ul>

\[\delta^S_R = \frac{(t_2-t_1) + (t_4-t_3)}{2} +\frac{-2\cdot t_4}{2} + \frac{2\cdot t_3}{2}\]

\[\delta^S_R = \frac{(t_2-t_1) + (t_4-t_3) - (2\cdot t_4) + (2\cdot t_3)}{2}\]

<ul>
  <li>Make it pretty and simplify:</li>
</ul>

\[\delta^S_R = \frac{t_2-t_1 + t_4-t_3 - 2\cdot t_4 + 2\cdot t_3}{2}\]

\[\delta^S_R = \frac{t_2-t_1 + \cancel{t_4}-\cancel{t_3} - \cancel{2}\cdot t_4 + \cancel{2}\cdot t_3}{2}\]

\[\rightarrow \delta^S_R = \frac{t_2 - t_1 - t_4 + t_3}{2}\]

<p>Or:</p>

\[\rightarrow \delta^S_R = \frac{- t_1 + t_2 + t_3 - t_4}{2}\]

<p>This formula calculates the offset between the Receiver’s time and the Sender’s time very accurately. With it, we can obtain the Receiver’s local time from the Sender’s local time at any moment:</p>

\[T_{Receiver} = T_{Sender} + \delta^S_R\]

<p>Note though that the caveats and assumptions from gPTP still apply, therefore:</p>
<ul>
  <li>Timer drift is not considered -&gt; synchronize time often (We do it with every packet)</li>
  <li>Synchronization error increases if the sending and receiving latency are not the same -&gt; Collect multiple time synchronizations and use a running median.</li>
</ul>

<h2 id="closing-words">Closing words</h2>
<p>Now you know in detail how the ALUP time synchronization works and how it was derived.</p>

<p>In the future, we can use this synchronized time to coordinate the RGB-Colors of multiple Receivers so they can update their LEDs at the same time. Furthermore we can minimize latency and jitter using time stamps in every ALUP Frame.</p>

<p>For reference implementations, see <a href="https://github.com/Skyfighter64/Python-ALUP">Python-ALUP</a> (Sender) and <a href="https://github.com/Skyfighter64/Arduino-ALUP">Arduino-ALUP</a> (Receiver). The ALUP Protocol Documentation can be found <a href="https://github.com/Skyfighter64/ALUP">here</a>.
<!--
 See if you can find $t_2$ and $t_3$ somewhere in the protocol headers.
:>
--></p>]]></content><author><name></name></author><category term="timesync" /><summary type="html"><![CDATA[This post is part of a series about my ALUP Protocol side project.]]></summary></entry><entry><title type="html">Driving Around | Hacking a cleaning Robot</title><link href="https://skyfighter64.github.io/robot/2024/10/21/Driving-Around.html" rel="alternate" type="text/html" title="Driving Around | Hacking a cleaning Robot" /><published>2024-10-21T21:51:33+00:00</published><updated>2024-10-21T21:51:33+00:00</updated><id>https://skyfighter64.github.io/robot/2024/10/21/Driving-Around</id><content type="html" xml:base="https://skyfighter64.github.io/robot/2024/10/21/Driving-Around.html"><![CDATA[<p><em>This post is part of a series about hacking and modding a cleaning robot.</em></p>

<h2 id="introduction">Introduction</h2>

<p>In my last post, I described how I got a microcontroller to interact with the motor drivers on the robot’s mainboard. After that, it was time to actually write some software so that we can use the wheels for more than just driving forward at full speed.</p>

<p>I want to use the robot for more demanding tasks such as automated driving in the future. For this to work, I needed a bigger framework of all kinds of features and functionality, which is why I started to develop <a href="https://github.com/Skyfighter64/Robocore">Robocore</a>.</p>

<h2 id="the-motor-driver-script">The motor driver script</h2>
<p>To set the speed and direction of each wheel using a simple function call like:</p>

<div class="language-cpp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// drive forward at full speed</span>
<span class="n">leftWheel</span><span class="p">.</span><span class="n">Drive</span><span class="p">(</span><span class="mi">255</span><span class="p">);</span>
</code></pre></div></div>
<p>I needed some kind of script communicating with the motor drivers on the mainboard.</p>

<p>This would then be helpful for more abstract ways of driving like using inverse kinematics (more on that later).</p>

<p>The task for the motor driver script would be to drive one of the wheels in the following way:</p>
<ul>
  <li>Drive the motor with a given speed</li>
  <li>Drive in reverse when a negative speed is given</li>
  <li>Be able to drive at slower than full speed (e.g. 50% speed)</li>
  <li>Ignore invalid speeds (e.g. 200% speed)</li>
  <li>Limit the motor to a minimum speed</li>
  <li>Do all this while using the two-wire signal inputs of the motor drivers on the mainboard</li>
</ul>

<p>Realizing this was very straightforward. I only needed to provide the PWM signals corresponding to a desired speed to the forward or backward inputs of the motor driver and check that the speed is within certain boundaries.</p>

<p>The minimum speed requirement mentioned above was needed since I found out that the motors were actually not turning for PWM-Signals below a certain threshold. This way I can just turn the motors off if such a speed is given, hopefully preventing them from getting fried.</p>

<p>I also added some functions to retrieve the current speed and driving direction from the motor, which might be helpful in the future.</p>

<p>The resulting code can be found in <a href="https://github.com/Skyfighter64/Robocore/blob/main/src/core/motor_driver.cpp">motor_driver.cpp</a> inside my <a href="https://github.com/Skyfighter64/Robocore">Robocore</a> project. It contains a class representing one single motor and interacts with the corresponding motor controller to provide functions for getting and setting the motor speed.</p>

<h2 id="inverse-kinematics">Inverse Kinematics</h2>

<h3 id="forward-kinematics">Forward Kinematics</h3>
<p>In robotics, the (forward) kinematics of a mobile robot usually describe the formulas and equations which tell you the speed and trajectory at which the robot is driving, given the speed and steering angle of all of its wheels. This obviously depends on the number of wheels, wheel placement, and their respective freedom of movement. A robot might not even have wheels at all, but rather tracks like a tank or legs like a dog.</p>

<h3 id="inverse-kinematics-1">Inverse Kinematics</h3>
<p>In contrast, the inverse kinematics describe the exact opposite. They are used to calculate how the robot needs to turn its wheels in order to drive on a given trajectory. This again heavily depends on the robot’s wheel configuration.</p>

<p>This is exactly what I needed to drive my robot around later when using mapping and autonomous navigation.
Luckily, my robot has only two wheels, mounted opposite each other. Both wheels can only turn forward or backward. Such a wheel configuration is called a “differential drive”, since the difference in speed between the two wheels determines if and how much the robot turns.</p>

<figure>
    <img src="/media/robot/differential_drive_robot.png" alt="Sketch of a differential drive robot" />
  <figcaption>Rough sketch of the top view of a differential drive robot</figcaption>
</figure>

<p>There are many resources on the internet describing differential drive kinematics, but just copying and pasting formulas is boring :)
That’s why I decided to do a little math myself and derived the inverse kinematics by hand. This way I also got full control over the details
and knew what to do when implementing them in my Robocore project.</p>

<h3 id="deriving-differential-drive-inverse-kinematics">Deriving Differential Drive Inverse Kinematics</h3>
<p>First, I had to specify how I wanted the kinematics to work. Depending on the robot and use case, the details here might differ from implementation to implementation.</p>

<p>My first idea was to give the robot a trajectory as a circle radius and a driving speed. The robot would then set the speed of its wheels to follow this given circle.</p>

<figure>
    <img src="/media/robot/circle_driving.png" alt="Robot driving on circle trajectory" />
  <figcaption>Robot driving on circle trajectory</figcaption>
</figure>
<p>This approach, however, turned out to be unsuitable for my purposes, for the following reasons:</p>
<ol>
  <li>
    <p>Two of the most common movements require ugly edge cases.
Driving straight forward requires the radius to be infinitely large and turning on the spot would need a radius of 0.
Both of these cases are difficult to represent and have to be checked manually.</p>
  </li>
  <li>
    <p>Always giving a robot a circular trajectory might not be very useful for other tasks. Navigation for example would require some extra maths just to calculate a circular trajectory if we want to drive to a point in worldspace (This wouldn’t even be the shortest path for a differential drive robot).</p>
  </li>
</ol>

<p>Therefore, I found another way to tell the robot where to go, using a rotation angle and the desired speed.
The idea behind this is that I could later create a vector from the robot’s position to the desired goal point, calculate its angle with respect to the robot, and use this information to tell the robot how to turn.</p>

<p>To achieve this, the robot should have the following behavior:</p>

<table>
  <thead>
    <tr>
      <th>Input Angle (deg | rad)</th>
      <th>Left Wheel  Speed</th>
      <th>Right Wheel  Speed</th>
      <th>Resulting action</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>0° | 0</td>
      <td>100%</td>
      <td>100%</td>
      <td>Drive straight forward</td>
    </tr>
    <tr>
      <td>45° | pi/4</td>
      <td>0%</td>
      <td>100%</td>
      <td>Drive a counterclockwise circle around the left wheel</td>
    </tr>
    <tr>
      <td>90° | 2*pi/4</td>
      <td>-100%</td>
      <td>100%</td>
      <td>Turn counterclockwise on the spot</td>
    </tr>
    <tr>
      <td>135° | 3*pi/4</td>
      <td>-100%</td>
      <td>0%</td>
      <td>Drive a counterclockwise circle around the right wheel (backwards)</td>
    </tr>
    <tr>
      <td>180° | 4*pi/4</td>
      <td>-100%</td>
      <td>-100%</td>
      <td>Drive straight backwards</td>
    </tr>
    <tr>
      <td>225° | 5*pi/4</td>
      <td>0%</td>
      <td>-100%</td>
      <td>Drive a clockwise circle around the left wheel (backwards)</td>
    </tr>
    <tr>
      <td>270° | 6*pi/4</td>
      <td>100%</td>
      <td>0%</td>
      <td>Turn clockwise on the spot</td>
    </tr>
    <tr>
      <td>315° | 7*pi/4</td>
      <td>100%</td>
      <td>100%</td>
      <td>Drive a clockwise circle around the right wheel</td>
    </tr>
  </tbody>
</table>

<p>In between these edge cases, the speed of one wheel should stay at 100% while the other wheel’s speed smoothly increases or decreases.</p>

<p>Visually, this would look somewhat like this:</p>

<figure>
    <img src="/media/robot/driving_angles.png" alt="Visualization of the driving angle edge cases" />
  <figcaption>Visualization of the driving angle edge cases and the corresponding trajectory</figcaption>
</figure>

<p>Here, the rectangle represents a function of the angle which returns the speed of the wheels. Note that the axes for the wheel speeds are rotated by 45° with respect to the direction the robot itself is facing (x-axis).</p>

<p>To derive this rectangular function, we divided the mentioned edge cases into four sections in which one wheel keeps a constant speed.
For the non-constant speed of the other wheel, we used simple trigonometry, namely the tangent function, to get the speed percentages for each angle.</p>

<figure>
    <img src="/media/robot/trigonometry.png" alt="Example for the trigonometry for one of the four sections" />
  <figcaption>Example for the trigonometry for one of the four sections</figcaption>
</figure>

<p>This resulted in the following inverse kinematics:</p>

<table>
  <thead>
    <tr>
      <th style="text-align: center">Start to end Angle Theta (deg | rad)</th>
      <th>x_l  (left wheel speed)</th>
      <th>x_r (right wheel speed)</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: center">(0°-90° | 0-pi/2)</td>
      <td>tan(pi/4 - Theta)</td>
      <td>1</td>
    </tr>
    <tr>
      <td style="text-align: center">(90°-180° | pi/2-pi)</td>
      <td>-1</td>
      <td>tan(3*pi/4 - Theta)</td>
    </tr>
    <tr>
      <td style="text-align: center">(180°-270° | pi-1.5*pi)</td>
      <td>tan(Theta - 5*pi/4)</td>
      <td>-1</td>
    </tr>
    <tr>
      <td style="text-align: center">(270°-360° | 1.5<em>pi-2</em>pi)</td>
      <td>1</td>
      <td>tan(Theta - 7*pi/4)</td>
    </tr>
  </tbody>
</table>

<p>This inverse kinematic function was then implemented into Robocore inside <a href="https://github.com/Skyfighter64/Robocore/blob/main/src/core/differential_drive.cpp">differential_drive.cpp</a> as function <code class="language-plaintext highlighter-rouge">DifferentialDriveInverseKinematics(double angle)</code> and then combined with the motor driver software to set the speed of the robot’s wheels. This allowed me to easily control the robot by specifying a speed and trajectory angle.</p>]]></content><author><name></name></author><category term="robot" /><summary type="html"><![CDATA[This post is part of a series about hacking and modding a cleaning robot.]]></summary></entry><entry><title type="html">Making the wheels spin | Hacking a cleaning Robot</title><link href="https://skyfighter64.github.io/robot/2024/10/19/Making-The-Wheels-Spin.html" rel="alternate" type="text/html" title="Making the wheels spin | Hacking a cleaning Robot" /><published>2024-10-19T22:00:33+00:00</published><updated>2024-10-19T22:00:33+00:00</updated><id>https://skyfighter64.github.io/robot/2024/10/19/Making-The-Wheels-Spin</id><content type="html" xml:base="https://skyfighter64.github.io/robot/2024/10/19/Making-The-Wheels-Spin.html"><![CDATA[<p><em>This post is part of a series about hacking and modding a cleaning robot.</em></p>

<h2 id="introduction">Introduction</h2>
<p>In my last post (found <a href="/robot/2024/09/22/Hacking-A-Cleaning-Robot.html">here</a>) I dove deep into the robot’s PCB and took a look at its features. Here I want to show how I found a way to integrate a microcontroller into the robot PCB and use it to control the wheels.</p>

<p>To achieve this, three things were needed:</p>
<ul>
  <li>Selecting a suitable microcontroller</li>
  <li>Finding a way to power the microcontroller from the robot battery</li>
  <li>Finding a way to inject driving signals into the motor drivers</li>
</ul>

<h2 id="choosing-a-microcontroller">Choosing a microcontroller</h2>
<p>Finding a suitable microcontroller was probably the easiest part. To make the most of the robot, I defined the following requirements:</p>
<ul>
  <li>As powerful as possible</li>
  <li>Energy efficient (powered from the robot)</li>
  <li>Being able to power it via 5V</li>
  <li>5V logic (GPIO) voltage</li>
  <li>Integrated FPU and 32bit arithmetics</li>
  <li>As much RAM as possible (at least more than 2kB)</li>
  <li>Any kind of additional connectivity (Wifi, Bluetooth, LoRa, etc…) would be cool</li>
</ul>

<p>Therefore, an Arduino was not an option. While it draws very little power, the ATmega328P on the Uno and Nano boards is not very powerful and only comes with 2048 bytes of RAM, which is definitely not enough.</p>

<p>The next best thing I found was an ESP32-WROOM dev board I had lying around. This microcontroller comes with a 32-bit CPU, way more RAM, and fulfils most of the requirements.
The only downside was its logic voltage level of 3.3V, but this problem was easily fixed by using some cheap logic-level shifters.</p>

<p>Another nice feature of this microcontroller is the built-in WiFi and Bluetooth, considering that I wanted to add a Bluetooth controller to the project in the future.</p>

<h2 id="extracting-power">Extracting power</h2>
<p>To power the microcontroller, I wanted to use the 5V logic voltage of the robot. The corresponding voltage regulator can supply up to 1.5A of current, which is more than enough for the ESP32-Board.</p>

<p>While looking over the robot’s main board, I found two unused spots for JST-Connectors on the circuit board, both of which are connected to GND and 5V. By soldering new headers onto one of them, I got a nice way of powering my external electronics.</p>

<figure>
  <img src="/media/robot/mainboard_power_extraction.jpg" alt="Power extraction points on the main pcb" />
  <figcaption>The newly soldered JST-Header</figcaption>
</figure>

<h2 id="injecting-signals-into-the-motor-drivers">Injecting signals into the motor drivers</h2>
<p>To use the microcontroller for driving the robot, I had to find a way to connect the ESP32’s GPIO pins to the input of the motor drivers.
Remembering what I’ve learned from the PCB analysis, I figured that the best way to approach this would be to replace pins 8, 9, 12, and 13 of the main IC with my own GPIO signals.</p>

<p>My first idea was to cut off the corresponding legs of the main IC and solder the cables to the remaining holes. This turned out to be very hard, since reaching the little legs with even the smallest side-cutters I had was next to impossible, and desoldering the whole chip would be pretty inconvenient.</p>

<p>That’s why I took another look at the traces of the main IC’s pins. I found some resistors which seemed to be the input points of the motor drivers and soldered my wires directly to them. To stop the old IC from interfering with my signals, I used a sharp knife to cut the PCB traces at its base.</p>

<figure>
    <img src="/media/robot/mainboard_signal_injection.jpg" alt="Signal injection to the motor drivers of the main pcb" />
  <figcaption>Injection points for the left and right forward/backwards signals</figcaption>
</figure>

<h2 id="testing-the-connections">Testing the connections</h2>
<p>To test if everything worked as expected, I wrote a simple script for the ESP32 which turns the connected pins on and off for a certain amount of time. Running it, I could observe the wheels turning and stopping, which meant that my signal injection was a success. I found that even using PDM signals to control the speed of each wheel worked almost flawlessly.</p>
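<p>To illustrate the idea behind that speed control, here is a minimal Python sketch of pulse-density modulation. It is a model of what the ESP32 script does per injected signal, not the actual firmware; the GPIO numbers are hypothetical and depend on how the wires were soldered.</p>

```python
# Hypothetical GPIO numbers for the four injected signals -- the real
# assignment depends on the wiring between the ESP32 and the main PCB.
LEFT_FWD, LEFT_REV, RIGHT_FWD, RIGHT_REV = 25, 26, 32, 33

def pdm_pattern(duty, length=10):
    """Return an ON/OFF bit pattern approximating `duty` (0.0 - 1.0).

    This is the core idea of pulse-density modulation: the ON slots are
    spread as evenly as possible over the pattern instead of being
    bunched together at the start of each period like in classic PWM.
    """
    bits = []
    acc = 0.0
    for _ in range(length):
        acc += duty
        if acc >= 1.0:          # emit an ON slot whenever the duty
            bits.append(1)      # accumulator crosses a whole step
            acc -= 1.0
        else:
            bits.append(0)
    return bits
```

<p>On the microcontroller itself, each bit of such a pattern would be written to the corresponding wheel pin with a short fixed delay in between, e.g. via <code class="language-plaintext highlighter-rouge">digitalWrite()</code> in an Arduino sketch or <code class="language-plaintext highlighter-rouge">machine.Pin</code> in MicroPython.</p>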

<p><br /></p>
<p align="center">
<img src="/media/robot/driving.gif" alt="Robot wheel turning" width="300" />
</p>]]></content><author><name></name></author><category term="robot" /><summary type="html"><![CDATA[This post is part of a series about hacking and modding a cleaning robot.]]></summary></entry><entry><title type="html">Topan AVC701 PCB Analysis | Hacking a cleaning Robot</title><link href="https://skyfighter64.github.io/robot/2024/09/22/Hacking-A-Cleaning-Robot.html" rel="alternate" type="text/html" title="Topan AVC701 PCB Analysis | Hacking a cleaning Robot" /><published>2024-09-22T22:02:54+00:00</published><updated>2024-09-22T22:02:54+00:00</updated><id>https://skyfighter64.github.io/robot/2024/09/22/Hacking-A-Cleaning-Robot</id><content type="html" xml:base="https://skyfighter64.github.io/robot/2024/09/22/Hacking-A-Cleaning-Robot.html"><![CDATA[<h2 id="introduction">Introduction</h2>
<p>The other day I got my hands on a “BRUNEAU TP-AVC701” or “Topan TP-AVC701” cleaning robot and wanted to make it drive autonomously using a microcontroller. After opening it up and looking at the rather
simple PCB, I thought it might be possible to inject signals directly into it, eliminating the need for new motor controllers 
and a battery charging circuit.</p>

<p>That’s why I analyzed the contents of the PCB and traced all copper lines and parts back to their original purposes.</p>

<p>Rainer Rebhan also hacked this robot; if you don’t want to keep the old PCB, I suggest checking out <a href="http://www.rainer-rebhan.de/proj_saugrob.html#">his website</a>.</p>

<h2 id="robot-overview">Robot Overview</h2>

<p>The robot itself is rather simple. It drives around in fixed
 patterns, reversing and turning away if it hits something with its front bumper.
It is turned on using a switch at the top and can be charged using an external 14.4V Power Supply.</p>

<p>The following images show the robot from the outside and inside:</p>

<figure class="half" style="display:flex">
    <img style="width:350px" src="/media/robot/robot.PNG" alt="The Topan TP-AVC701 Cleaning Robot" />
    <img style="width:350px" src="/media/robot/robot_inside.jpg" alt="Inside the robot" />
</figure>

<p>As you can see, there is a lot of space for modifications and tinkering. Note that I already removed some unneeded parts like the vacuum motor.</p>

<h2 id="main-pcb-features">Main PCB Features</h2>
<p>The default PCB already comes with a good set of features, namely:</p>
<ul>
  <li>Battery Charging/Discharging logic</li>
  <li>Motor Driver for both wheel motors</li>
  <li>On/Off control for the vacuum motor</li>
  <li>Collision Detection using the front bumper</li>
  <li>Two DC/DC Converters</li>
</ul>

<h2 id="pcb-sections">PCB Sections</h2>
<p>The following image shows the different purposes I determined for the different areas of the main PCB. Each feature is explained in more detail below.
<img src="/media/robot/main_pcb.PNG" alt="Main PCB Blocks" width="1000" /></p>

<h3 id="6-pin-connector">6-Pin Connector</h3>
<p>The 6 Pin JST connector on the right side connects the main PCB to two small daughterboards in the casing. These connect to the battery, PSU and ON/OFF switch. It also connects directly to the vacuum motor.</p>

<table>
  <thead>
    <tr>
      <th>Pin</th>
      <th>Purpose</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1</td>
      <td><code class="language-plaintext highlighter-rouge">V_battery</code> (Positive battery terminal)</td>
    </tr>
    <tr>
      <td>2</td>
      <td><code class="language-plaintext highlighter-rouge">V_psu</code> (Positive Power supply terminal)</td>
    </tr>
    <tr>
      <td>3</td>
      <td>Power Switch (ON: <code class="language-plaintext highlighter-rouge">V_battery</code>, OFF: NC)</td>
    </tr>
    <tr>
      <td>4</td>
      <td>Ground</td>
    </tr>
    <tr>
      <td>5</td>
      <td>Vacuum Motor +</td>
    </tr>
    <tr>
      <td>6</td>
      <td>Vacuum Motor -</td>
    </tr>
  </tbody>
</table>

<p>Note that the Power Switch at Pin 3 is connected to <code class="language-plaintext highlighter-rouge">V_battery</code> if the switch is turned ON. This serves as the main power source for the robot when running. Connector Pin 1 on the other hand is always connected to the battery and is used to charge the robot when a PSU is present.</p>

<h3 id="main-ic">Main IC</h3>

<p>The main IC seems to be either a microcontroller or (more likely) a custom-made ASIC. It controls basically everything on the robot including:</p>
<ul>
  <li>Both driving motors’ speeds</li>
  <li>Vacuum Motor ON/OFF</li>
  <li>Battery charging</li>
  <li>Blinky LEDs</li>
  <li>Collision Detection</li>
</ul>

<p>Picture of the main IC:
<img src="/media/robot/main_ic.PNG" alt="Pinout for the main IC" width="600" /></p>

<h4 id="pinout">Pinout:</h4>

<table>
  <thead>
    <tr>
      <th>Mode</th>
      <th>Purpose</th>
      <th style="text-align: center">Pin</th>
      <th style="text-align: right"> </th>
      <th> </th>
      <th> </th>
      <th style="text-align: center"> </th>
      <th> </th>
      <th style="text-align: right"> </th>
      <th> </th>
      <th style="text-align: center">Pin</th>
      <th>Purpose</th>
      <th>Mode</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td> </td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td style="text-align: right"> </td>
      <td>┌</td>
      <td>─</td>
      <td style="text-align: center">\_/</td>
      <td>─</td>
      <td style="text-align: right">┐</td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td> </td>
    </tr>
    <tr>
      <td>Output</td>
      <td>Green LED</td>
      <td style="text-align: center">1</td>
      <td style="text-align: right">─</td>
      <td>┤</td>
      <td>0</td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td style="text-align: right">├</td>
      <td>─</td>
      <td style="text-align: center">14</td>
      <td>Red LED</td>
      <td>Output</td>
    </tr>
    <tr>
      <td>Input</td>
      <td>Light Barrier</td>
      <td style="text-align: center">2</td>
      <td style="text-align: right">─</td>
      <td>┤</td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td style="text-align: right">├</td>
      <td>─</td>
      <td style="text-align: center">13</td>
      <td>Right Wheel Forward</td>
      <td>Output</td>
    </tr>
    <tr>
      <td> </td>
      <td>NC</td>
      <td style="text-align: center">3</td>
      <td style="text-align: right">─</td>
      <td>┤</td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td style="text-align: right">├</td>
      <td>─</td>
      <td style="text-align: center">12</td>
      <td>Right Wheel Reverse</td>
      <td>Output</td>
    </tr>
    <tr>
      <td> </td>
      <td><code class="language-plaintext highlighter-rouge">V_logic</code> (5V)</td>
      <td style="text-align: center">4</td>
      <td style="text-align: right">─</td>
      <td>┤</td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td style="text-align: right">├</td>
      <td>─</td>
      <td style="text-align: center">11</td>
      <td>Ground</td>
      <td> </td>
    </tr>
    <tr>
      <td>Input</td>
      <td>Comparator</td>
      <td style="text-align: center">5</td>
      <td style="text-align: right">─</td>
      <td>┤</td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td style="text-align: right">├</td>
      <td>─</td>
      <td style="text-align: center">10</td>
      <td>Battery Charging Enable</td>
      <td>Output</td>
    </tr>
    <tr>
      <td>Output</td>
      <td>Vacuum Motor On/OFF</td>
      <td style="text-align: center">6</td>
      <td style="text-align: right">─</td>
      <td>┤</td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td style="text-align: right">├</td>
      <td>─</td>
      <td style="text-align: center">9</td>
      <td>Left Wheel Reverse</td>
      <td>Output</td>
    </tr>
    <tr>
      <td>Input</td>
      <td>Sense PSU Presence</td>
      <td style="text-align: center">7</td>
      <td style="text-align: right">─</td>
      <td>┤</td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td style="text-align: right">├</td>
      <td>─</td>
      <td style="text-align: center">8</td>
      <td>Left Wheel Forward</td>
      <td>Output</td>
    </tr>
    <tr>
      <td> </td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td style="text-align: right"> </td>
      <td>└</td>
      <td>─</td>
      <td style="text-align: center">─</td>
      <td>─</td>
      <td style="text-align: right">┘</td>
      <td> </td>
      <td style="text-align: center"> </td>
      <td> </td>
      <td> </td>
    </tr>
  </tbody>
</table>

<h3 id="motor-drivers">Motor Drivers</h3>
<p>There are two motor drivers, each consisting of an H-bridge and some upstream logic.</p>

<p>The following pins from the main IC are used for driving:</p>

<table>
  <thead>
    <tr>
      <th>Pin No.</th>
      <th>Motor</th>
      <th>Purpose</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>13</td>
      <td>Right</td>
      <td>Forward</td>
    </tr>
    <tr>
      <td>12</td>
      <td>Right</td>
      <td>Reverse</td>
    </tr>
    <tr>
      <td>9</td>
      <td>Left</td>
      <td>Reverse</td>
    </tr>
    <tr>
      <td>8</td>
      <td>Left</td>
      <td>Forward</td>
    </tr>
  </tbody>
</table>

<p>The previously mentioned extra logic makes sure that the robot does not try to
drive forward and reverse at the same time:</p>

<table>
  <thead>
    <tr>
      <th>Pin 8 | 13 (Forward)</th>
      <th>Pin 9 | 12 (Reverse)</th>
      <th>Motor Driver Output</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>0</td>
      <td>0</td>
      <td>Stop</td>
    </tr>
    <tr>
      <td>0</td>
      <td>1</td>
      <td>Reverse</td>
    </tr>
    <tr>
      <td>1</td>
      <td>0</td>
      <td>Forward</td>
    </tr>
    <tr>
      <td>1</td>
      <td>1</td>
      <td>Reverse</td>
    </tr>
  </tbody>
</table>
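<p>The truth table above can be written as a small Python model (the function name is mine; the behavior mirrors what I measured):</p>

```python
def motor_driver_output(forward, reverse):
    """Model of the upstream logic in front of each H-bridge.

    Mirrors the measured truth table: whenever the reverse input is
    high, it wins regardless of the forward input, so the H-bridge
    never sees a conflicting forward+reverse command.
    """
    if reverse:
        return "Reverse"
    return "Forward" if forward else "Stop"
```

<p>Interestingly, this logic does not simply forbid the invalid (1, 1) state; it resolves it to Reverse, which keeps the H-bridge safe from shoot-through without needing any error handling.</p>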

<h2 id="charging-circuit">Charging Circuit</h2>
<p>The Charging logic consists of a Voltage Sensing Circuit and a Charging Activation Circuit controlled by the main IC.</p>

<p>Using its two internal comparators, the LM393N checks the reduced PSU/Battery voltage <code class="language-plaintext highlighter-rouge">max(V_battery, V_PSU)</code> and the motor voltage <code class="language-plaintext highlighter-rouge">V_motor</code> against the 5V logic voltage <code class="language-plaintext highlighter-rouge">V_logic</code> and outputs a logic 1 to the main IC if either of the voltages is greater than <code class="language-plaintext highlighter-rouge">V_logic</code>.</p>

<p>In logic terms, this would be:</p>

<div class="language-py highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">out</span> <span class="o">=</span> <span class="p">(</span><span class="n">V_motor</span> <span class="o">&gt;=</span> <span class="n">V_logic</span><span class="p">)</span> <span class="ow">or</span> <span class="p">(</span><span class="nb">max</span><span class="p">(</span><span class="n">V_psu</span><span class="p">,</span> <span class="n">V_switch</span><span class="p">)</span> <span class="o">&gt;=</span> <span class="n">V_logic</span><span class="p">)</span>
</code></pre></div></div>

<blockquote>
  <p><strong>Note:</strong> <br />
The exact purpose of the Comparator is currently unclear to me. Unfortunately, I don’t have the original PSU anymore and therefore cannot confirm my theories.</p>

  <p>The measurements using another power supply did not replicate the behavior I expected, so take this section with a grain of salt.</p>
</blockquote>

<p>Depending on the logic signal of the comparator (and maybe something else), the main IC enables the battery charging logic, which checks if a PSU is present, and if so, connects <code class="language-plaintext highlighter-rouge">V_battery</code> and <code class="language-plaintext highlighter-rouge">V_PSU</code> together using a MOSFET.</p>
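<p>Putting the pieces together, my current best guess at the charging control can be sketched in a few lines of Python. This is a speculative model, not verified behavior: all names are mine, and as the note above says, the comparator’s exact role is unconfirmed.</p>

```python
def comparator_out(v_motor, v_psu, v_switch, v_logic=5.0):
    # Model of the LM393N stage as I currently understand it:
    # logic 1 when either monitored voltage reaches the 5V logic rail.
    return (v_motor >= v_logic) or (max(v_psu, v_switch) >= v_logic)

def charge_mosfet_closed(comparator, psu_present):
    # Guess: the main IC only connects V_battery and V_PSU through the
    # MOSFET when the comparator is high AND a PSU is detected.
    return comparator and psu_present
```

<p>If this model is right, unplugging the PSU opens the MOSFET immediately even while the comparator still reports a high voltage, which would be a sensible fail-safe.</p>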

<h3 id="light-barrier">Light Barrier</h3>
<p>The light barrier is used for detecting collisions using the front bumper. If the robot bumps into something while driving, a plastic piece blocks the light barrier and the robot stops, reverses and turns around.</p>

<h2 id="power-distribution">Power Distribution</h2>
<p>The PCB has two main Power Sources: The Battery and an external Power Supply. 
Depending on the power source used, two different DC/DC Converters create the voltages needed by the components and motors.</p>

<p>Here is a quick overview over the different power rails:
<img src="/media/robot/power_rails.PNG" alt="PCB power rails overview" width="1000" />
Note that the image is mirrored to make it easier to match the components on the other side.</p>

<h3 id="l7805cv-voltage-regulator">L7805CV Voltage Regulator</h3>
<p>This Voltage regulator receives the variable voltage from either the Battery or the Power Supply (if connected) and regulates it down to a constant 5.02V. It is used to power the main IC and the light barrier.</p>

<p>According to its data sheet, it has all kinds of protection circuits and can provide up to 1.5A of current (although this requires an additional heat sink).</p>

<h3 id="34063ap1-buckboost-converter">34063AP1 Buck/Boost Converter</h3>
<p>This is the second DC/DC Converter on this board. Its purpose is to provide 8.34V to the motor controller of both driving motors.</p>

<p>It only delivers power when the robot is in battery powered mode and the switch is turned on.</p>

<h2 id="vacuum-motor">Vacuum Motor</h2>
<p>The vacuum motor uses the battery voltage <code class="language-plaintext highlighter-rouge">V_Switch</code> for power. It can be switched ON/OFF by the main IC using a MOSFET. For my purposes it was not needed anymore, which is why I removed it.</p>