GradientOne provides several tools for performing meta-analyses of results. As devices become increasingly complex, the point or points of failure can be harder to identify. Incorporating data mining techniques such as machine learning and data exploration into the process of bringing new products to market can help at multiple stages in the pipeline. In the research and development phase, it can predict which avenues of implementation are most likely to be fruitful. In the manufacturing stage, it can identify faulty equipment before the final steps in the process, saving operating costs and time. In the market stage, it can identify devices that need to be recalled or patched before failures are reported. This first post will go over how to visualize and explore categorical data with a hypothetical example.
A robotics company wants to investigate the cause of failures. They have iterated the design of their robots over the years, such that although each robot is running the same software and has the same chassis, the components may have changed. For example, some robots have nylon supports instead of aluminum ones, some have a copper heatsink on the CPU, some have Mecanum wheels instead of regular ones, and some have lithium batteries instead of Ni-Cad batteries. The robotics company has a record of all of the components that went into each of their robots, and uploads each part manifest as a result, where 0 means the part is not in the robot and 1 means it is:
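To make the encoding concrete, here is a minimal Python sketch of such a manifest. The part names and rows are hypothetical, not taken from a real upload, and the naive per-part failure count is only a first rough look at the data:

```python
# Hypothetical part manifests, encoded as in the uploaded results:
# 1 = part present in the robot, 0 = absent.
manifests = [
    {"nylon_supports": 1, "copper_heatsink": 0, "mecanum_wheels": 0,
     "lithium_battery": 1, "result": 1},  # result 1 = pass
    {"nylon_supports": 0, "copper_heatsink": 1, "mecanum_wheels": 1,
     "lithium_battery": 0, "result": 0},  # result 0 = fail
]

# Count, for each component, how many failing robots contained it.
failure_counts = {}
for row in manifests:
    if row["result"] == 0:
        for part, present in row.items():
            if part != "result" and present:
                failure_counts[part] = failure_counts.get(part, 0) + 1

print(failure_counts)
```

Counts like these hint at suspect components, but they cannot capture interactions between parts, which is why the analyses below are needed.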
Since this data is categorical, and not numerical, looking at the scatterplots does not yield useful results:
Instead, the Decision Tree meta analysis is the best tool for categorical data. Since the Decision Tree analysis is a supervised learning analysis, whereas the Scatterplot Matrix was an unsupervised learning analysis, the test engineer will need to provide additional input: the dependent variable. In this case, the dependent variable is the pass/fail Result:
View Results takes you to the Decision Tree. Two trees are generated. The first is the Optimal Tree, which separates the most passes from failures using the fewest decision points:
From this tree, we can see that the combination of Mecanum wheels and either a copper heatsink or a lithium battery will likely result in a failure, and that the combination of regular wheels, aluminum supports, and a copper heatsink also leads to failures. Scrolling down to the partition plot, we can see that this tree partitions the passes and failures well: there are relatively few passes in sections with many failures, and vice versa:
The second tree, the Symmetric Tree, can be seen by changing the value in the dropdown. The Symmetric Tree expands all the parameters in the same order, which makes it useful for data exploration:
The partition plot for the symmetric tree has more partitions than the optimal tree, but still separates the passes and failures well. From this graph, we can see that the Mecanum wheels are the primary cause of failures, though it is harder to tell how the other components contribute:
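For readers curious how a decision point is chosen, here is a rough Python sketch of the idea behind a single split. The rows and component names are illustrative, and real decision-tree implementations score candidate splits with impurity measures such as Gini or entropy rather than this simple accuracy score:

```python
# Choose the binary component whose presence best separates
# passes (result 1) from failures (result 0).
rows = [
    {"mecanum_wheels": 1, "copper_heatsink": 1, "result": 0},
    {"mecanum_wheels": 1, "copper_heatsink": 0, "result": 0},
    {"mecanum_wheels": 0, "copper_heatsink": 1, "result": 1},
    {"mecanum_wheels": 0, "copper_heatsink": 0, "result": 1},
]

def split_score(rows, part):
    """Fraction of rows classified correctly by the rule
    'fail when the part is present, pass otherwise'."""
    correct = sum(1 for r in rows if r[part] != r["result"])
    return correct / len(rows)

best = max(("mecanum_wheels", "copper_heatsink"),
           key=lambda p: split_score(rows, p))
print(best)  # the component whose presence best predicts failure
```

A full tree repeats this selection recursively on each resulting subset, which is how the Optimal Tree ends up with so few decision points.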
GradientOne provides several tools for performing meta-analyses of results. As devices become increasingly complex, the point or points of failure can be harder to identify. Incorporating data mining techniques such as machine learning and data exploration into the process of bringing new products to market can help at multiple stages in the pipeline. In the research and development phase, it can predict which avenues of implementation are most likely to be fruitful. In the manufacturing stage, it can identify faulty equipment before the final steps in the process, saving operating costs and time. In the market stage, it can identify devices that need to be recalled or patched before failures are reported. This post will go over how to visualize and explore numerical data with a hypothetical example.
A manufacturer of LED TVs wants to investigate the causes of pixel failure in their TVs. If there are no dead pixels in a screen, the screen is tagged as a Pass. If a single pixel on a screen is dead, it is tagged as a Warning, because most people won't notice a single dead pixel. If more than one pixel on the screen is dead, it is tagged as a Fail.
During the testing phase, the gain (how much current is produced for the incident light intensity), the current, and the temperature of each pixel are measured. These results were generated by running a configuration called "pixel". The average gain, average current, average temperature, and max temperature are stored in the metadata section of each result:
Since this data is numerical, the first thing a test engineer can do to investigate the cause of failures is to look at the scatterplots. On the Analysis page, the test engineer would click the checkbox next to Scatterplot Matrix, under Meta Analysis Suites, and then on Run Selected.
On the modal, enter pixel in the Select Data box and wait for the modal to show results for "pixel". Then click on Select All.
Then scroll all the way down the modal to the run button:
The gears will turn as the metadata is compiled, and a link to the meta result will appear after the command is complete:
The View Results link will take you to the scatterplot. The factors that appear on the x and y axis can be changed using the drop-downs to the right of the chart:
By selecting the gain and current values as axes, we can see a cluster of passes around the high-gain low-current corner of the plot, but no clear cluster distinction between the warnings and failures:
However, when the axes are changed to average and max temperature, all three clusters become apparent:
From this, the test engineer can conclude that although there are correlations between gain, current, and passing screens, the real cause is likely to be the temperature.
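The reasoning above can be sketched numerically. With illustrative (not real) measurements, the class means are nearly identical along the gain axis but well separated along the temperature axis, which is why only the temperature scatterplot shows distinct clusters:

```python
import statistics

# Illustrative per-screen measurements: (avg_gain, avg_temp, tag).
# These values are made up to mirror the scatterplot observation.
screens = [
    (1.9, 40.0, "pass"), (2.1, 42.0, "pass"),
    (2.0, 55.0, "warning"), (1.8, 57.0, "warning"),
    (2.2, 70.0, "fail"), (1.7, 72.0, "fail"),
]

def mean_by_tag(index):
    """Mean of one measurement column, grouped by tag."""
    return {tag: statistics.mean(s[index] for s in screens if s[2] == tag)
            for tag in ("pass", "warning", "fail")}

gain_means = mean_by_tag(0)   # nearly identical across tags
temp_means = mean_by_tag(1)   # well separated across tags
print(gain_means, temp_means)
```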
GradientOne provides a CAN packet sniffer, like Wireshark or Copley's CANView. The CAN logger can capture all CAN frames written to and read from the attached CAN interface. It can pick up frames generated by a configuration run, but those frames will only be available after the configuration has finished. The gateway client must be in a "Ready" state before the logger can be started. This lets you understand what a GradientOne movement configuration is doing, or keep a record of all frames collected.
Go to /canlogs, and click on the Start button. The button will disappear while the web interface instructs the gateway client to start sending frames:
The Start button may become available before or after new frames appear in the table, depending on what other commands are in the queue. After being clicked, Start will remain on until the Stop button is clicked. Stop and Clear behave like Start: they send commands to the gateway client. Once Start is pressed, you should start seeing the frames sent by the status checker. If there are no commands in the queue, the status checker runs approximately once per minute. The status checker turns on heartbeats for each node and then queries the registers via SDOs, so you will see heartbeats and SDO responses in the incoming frames:
You can use the "Filter to:" selection dropbox to view only SDO responses:
Or, you can filter out frames generated by the status checker by clicking on Exclude Status Frames:
In another window, you can run frames in the editor, such as initiating an SDO read of the motor's position:
After the editor config completes, these frames will show up in the CAN logger:
Logs can either be downloaded as a CSV, or saved on the GradientOne instance by clicking on Save. In both cases, the log will contain only the frames that are on the screen. Logs are saved under the Unix timestamp of when they were saved:
Once a log is loaded, you can share it with others who have access to the GradientOne instance by copying and pasting the URL.
Using Python With GradientOne's APIs
The adoption of Python as a programming language for hardware test development and instrument control has accelerated significantly over the past decade. Python’s ease-of-use, portability, and error reduction make it an ideal language for beginners as well as professionals. A recent study by IEEE ranked Python as the most popular programming language in the world.
In this blog post we’ll show how someone can leverage GradientOne’s Hybrid Cloud Test Lab features by combining Python and GradientOne’s instrument control APIs. This approach allows for normal lab operations and uploading data to the cloud. It supports semi-automated testing for use cases that involve hardware lab, manufacturing line, or field operations.
For this example of Hybrid Cloud Lab I have an Aardvark I2C/SPI Host Adapter, an I2C board, a Tektronix MSO5204B oscilloscope, a laptop, and a GradientOne Gateway. The Python program runs on the laptop.
The basic steps that occur when the Python program starts:
Initialize Cloud Defined Oscilloscope Setup
One of the capabilities that GradientOne provides is a cloud-based, centralized test plan repository. This allows for fast and easy re-use of previously vetted test routines, all of which are reviewable and editable via a web browser. Highlights of this configuration are shown below:
The Python script issues an HTTP POST specifying the name of the oscilloscope configuration to be used in the test.
URL - https://acme.gradientone.com/commands
Oscilloscope configuration = “I2C_Test”
When this occurs the GradientOne Gateway receives the configuration and loads it on the oscilloscope.
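A minimal sketch of that POST using only Python's standard library might look like the following. Note that the payload field name and the omission of authentication headers are assumptions for illustration; consult the API documentation for the exact schema:

```python
import json
import urllib.request

# URL and configuration name come from the steps above;
# the payload field name "config_name" is hypothetical.
URL = "https://acme.gradientone.com/commands"
payload = {"config_name": "I2C_Test"}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would actually send it;
# authentication headers (omitted here, as in the excerpt) are also needed.
print(request.full_url, request.get_method())
```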
Below is an excerpt of the Python script that controls the initialization of the oscilloscope. (Note: not all aspects of the variable instantiation and authentication are shown for brevity.)
Generate I2C Signals
The Python script generates I2C signals on the Aardvark, which are sent to the I2C activity board. I have slightly modified the script for this example (see the excerpt below), but the base script is provided by Total Phase and is available here. The Tektronix scope leads are connected to the I2C activity board.
The script used for generating the I2C signals takes the following arguments:
The signal is subsequently acquired by the oscilloscope and transmitted to the cloud via the GradientOne Gateway. The Gateway takes care of the data management and network optimization for a fast and simple upload to the cloud.
Putting It All Together
For the example I will initiate the script with the following arguments:
The test result is now available via the GradientOne platform. The test result includes the raw data (time series/voltages) and all available metadata (scope settings, DUT settings, user, date/time stamp, etc.).
We also store the image from the scope to provide the user with additional information and confirmation.
Storing all the raw data lets us leverage the benefits of cloud computing and contemporary web browser technologies. I can zoom in (shown below), use markers, compute deltas, and take advantage of advanced analysis features such as anomaly detection, waveform matching, custom measurements, and more.
Many organizations we work with have significant investments in their existing test labs, but are looking for a way to capitalize on the benefits of cloud computing's analysis, storage, and data visualization capabilities. The Hybrid Cloud Test Lab approach is an easy way to get started.
We've reached an era where the limiting factor in the speed of digital communications is no longer the speed of processing the received or transmitted signals, but the fall and rise times of the electronic signals themselves. Minimizing rise/fall times is a particularly difficult optimization problem because they often depend on the cumulative resistances and impedances in the circuit, which vary from circuit to circuit with manufacturing defects in the components.
A common method for performing these rise/fall time measurements is to look at a signal on an Oscilloscope, zoom in to the transition edges, put the cursors over the transition edges, and write down the time delta in a spreadsheet. This method takes about 30 minutes per signal.
GradientOne can automate every step of this process: acquiring the data, identifying transition edges, calculating the rise/fall times, and filtering the results based on performance criteria.
GradientOne's analysis suites provide a way to set up and perform batch rise/fall time measurements, and then filter the stored traces for those where the rise/fall times were outside the desired operating parameters. This blog post will cover how GradientOne calculates rise and fall times, as well as how to set up and perform rise/fall time filtering.
Determining high and low levels
In order to find transition times, we first have to identify the high and low signals. This is a task that we humans can do with ease, but that is difficult for machines. For example, a signal that is not properly grounded can lead to wandering low signals:
Or, signals can have Poissonian-distributed events (shot noise) outside of the range spanned by the thermal noise (Johnson-Nyquist noise), which throw off the range of voltages:
Because of shot noise, we can't simply take the lowest and highest points in the signal as the minimum and maximum voltage values, and because of the wandering bottom signal, we can't simply take the two peaks of a histogram. To solve this problem, GradientOne instead uses Kernel Density Estimation: we turn each point into a Gaussian distribution centered on its voltage, and then sum these Gaussians, so that instead of a histogram we get a smoothed probability distribution:
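A minimal Python sketch of this kernel density estimate, with illustrative voltage samples and an assumed bandwidth, looks like:

```python
import math

# Illustrative voltage samples: a low level near 0 V, a high level near 5 V.
samples = [0.02, 0.01, 0.03, 0.02, 4.98, 5.01, 5.0, 4.99]
bandwidth = 0.1  # kernel width; in practice this would be tuned to the data

def density(v):
    """Sum of normalized Gaussians, one centered on each sample."""
    return sum(
        math.exp(-((v - s) ** 2) / (2 * bandwidth ** 2))
        / (bandwidth * math.sqrt(2 * math.pi) * len(samples))
        for s in samples
    )

# Evaluate on a grid and take the local maxima as candidate signal levels.
grid = [i * 0.01 for i in range(0, 551)]
values = [density(v) for v in grid]
peaks = [grid[i] for i in range(1, len(grid) - 1)
         if values[i] > values[i - 1] and values[i] >= values[i + 1]]
print(peaks)  # two peaks: one near 0.02 V, one near 5.0 V
```

Because the Gaussians smooth over both the wandering baseline and isolated shot-noise outliers, the two dominant peaks land on the true low and high levels.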
We then take the derivative of our probability density function to locate its peaks, and take the two largest as our lower and upper signal levels. For example, in the case of the wandering lower signal, our distribution looks like:
Which leads to the detected low and high signals of:
In the shot noise example, we have two evenly-distributed humps in the histogram:
And a calculated low/high voltage range that ignores the Shot noise:
The rest of this blog post will cover how to set up a rise/fall measurement suite with pass/fail criteria. First, go to the analysis tab and click "Create New Measurement Suite". Select a Rise/Fall time measurement as step 1. The channel we will be looking at is channel 1 of the oscilloscope data, and we will set the transition range to 0.1. Standard values for rise/fall time measurements are 30/70% (0.3) and 10/90% (0.1). The range is calculated as the difference between the low and high lines on the graph above. The start of a rise is identified when the voltage crosses above 10% of this range, and the end of a rise is identified when the voltage crosses above 90% of this range.
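The threshold-crossing logic above can be sketched in a few lines of Python. The trace is synthetic, and a real measurement would interpolate between samples rather than use raw sample indices:

```python
# Low/high levels as detected by the KDE step; range and thresholds follow.
low, high = 0.0, 1.0
rng = high - low
t_step = 1e-6  # one sample per microsecond (illustrative)

# Synthetic trace: flat low for 5 samples, linear ramp over 10, then high.
trace = [0.0] * 5 + [i / 10 for i in range(11)] + [1.0] * 5

# Start of rise: first sample above low + 10% of range.
start = next(i for i, v in enumerate(trace) if v > low + 0.1 * rng)
# End of rise: first sample above low + 90% of range.
end = next(i for i, v in enumerate(trace) if v > low + 0.9 * rng)

rise_time = (end - start) * t_step
print(rise_time)
```

Running the same crossing detection on every edge in a capture, then averaging, yields the "average rise time" measurement used in the pass/fail step below.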
The second step is a Pass/Fail Criteria. We name this variable "Pass/Fail Criteria" out of a lack of creativity; it could be anything you need, such as "Rise Time Test". The average rise time measurement generated in the first step can be found under GradientOne Measurements -> Channel 1 (ch1) -> average rise time.
We can then run this measurement suite against some data captured using a quick capture on our scope. We can find this data by searching for the result id. After clicking run, the results of each step will be enumerated in the run modal.
"View Results" will take us to the results view, where there are two things to note. First, markers will be placed where transition edges were discovered. These markers can be cleared using either the "Clear Generated Analysis Results" or "Clear Markers from Trace" analysis suites.
Second, calculated results will appear below the plot in the "GradientOne Measurements" section:
Now that these calculations have been associated with the result id, they can be used when searching for results. When converting measurement names to search keys, special characters are removed, spaces are replaced with underscores, the channel name is prepended, and if the value is a boolean instead of a number, '_bool' is appended. You can also use the structured query to build the query with the correct formatting.
After clicking on the check mark, this will be reformatted as ch1_average_rise<1e-6, and running this query returns only one result: the one we just ran our analysis suite against.
Similarly, we can also search for where our pass/fail criteria passed:
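The key-formatting rules described above can be sketched as a small Python helper. This is an illustration of the stated rules, not GradientOne's actual implementation:

```python
import re

def to_search_key(channel, measurement, value):
    """Build a search key: strip special characters, replace spaces
    with underscores, prepend the channel name, and append '_bool'
    for boolean values."""
    name = re.sub(r"[^\w ]", "", measurement)  # drop special characters
    name = name.replace(" ", "_")              # spaces -> underscores
    key = "%s_%s" % (channel, name)            # prepend channel name
    if isinstance(value, bool):
        key += "_bool"
    return key

print(to_search_key("ch1", "average rise", 1.2e-7))      # ch1_average_rise
print(to_search_key("ch1", "Pass/Fail Criteria", True))  # ch1_PassFail_Criteria_bool
```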
Using the GradientOne analysis suite, it is easy to run a rise/fall time compliance test against batches of data and then filter out the instances that failed.
Reading in and generating Device Configuration Files (DCFs) is now available in GradientOne's CANOpen Editor. This post will demonstrate how to generate and read the file.
In the CAN editor, enter some frames, for example:
WRITE [0x00, 0x21, 0x46, 0x00, 0x00, 0x00, 0x59, 0x00, 0xB2, 0x00, 0x59, 0x00, 0xFE, 0xC5, 0xFD, 0x7F, 0xB2, 0x1C] to velocity_loop_output_filter on node 1
Note that you can also provide raw hex, but it must be provided as frames, one line at a time. Since the above write has more than 4 bytes, it must be expressed as a multi-frame SDO download. Entering the lines:
0x601, 0x21, 0x06, 0x21, 0x00, 0x12, 0x00, 0x00, 0x00
0x601, 0x00, 0x00, 0x21, 0x46, 0x00, 0x00, 0x00, 0x59
0x601, 0x10, 0x00, 0xB2, 0x00, 0x59, 0x00, 0xFE, 0xC5
0x601, 0x07, 0xFD, 0x7F, 0xB2, 0x1C
has the same effect as the single WRITE command above.
Press Add (1), then DCF (2), and then Download as DCF (3). Your browser will download a .bin file (4).
Looking at this binary file in a hex editor (this is Bless for Linux), we can see the number of SDOs to write (red), the address (blue), the number of bytes in this SDO (green), and the SDO data (orange).
We can now take this DCF and upload it in a new session. To do this, we press DCF (1), then Upload DCF (2), then add the file we just downloaded using Choose File (3). Specify which node this DCF will be written to (4); by default the DCF is written to node 1. Finally, upload the file (5).
After pressing upload, the frames will be added to the frame Queue:
Interfacing multiple devices with non-standardized communication protocols is a never-ending struggle. In the mid-eighties, the US Air Force tried to create a universal translator between the instruments they used, but abandoned the project when it started sucking up significant development costs. HP, seeing an opportunity and having already made a lot of money by licensing its GPIB cable standard, invested in standardizing a communication protocol known as SCPI.
Though SCPI (often pronounced "skippy") enabled a lot of instrument functionality to be accessed with educated guesses, it did not guarantee that manufacturers were perfectly compliant with SCPI commands, and as more and more devices switch to USB interfaces, fewer and fewer devices allow string-based communication, instead relying on manufacturer drivers. This means that, until we're all living in our robot-controlled virtual reality future, looking up manufacturers' manuals and driver APIs will be a frequent chore.
While we've developed configuration forms for most common settings, such as triggering settings, offsets, and sampling modes, we recognize that our default data capture configuration options will not be sufficient for all users, and so we have developed an "Editor" where you can write simple scripts. To access it, click on the "Settings" drop-down next to the config or run button, and select "Editor".
When designing our command language, we tried to pull out the features we liked from previous languages. SCPI has an elegance to it in that it recognizes the two most common operations done on instruments: retrieving values from the device, using the question mark (?), and configuring settings (no question mark). However, like Python, we want our language to be human-readable. Thus, the three most common commands in our command language are likely to be QUERY, SET, and WRITE, written as:
SET is used to put the instrument into specific states, such as switching between enabled and disabled, while WRITE is used to configure settings. Certain commands are instrument-specific. These commands are expressed in all-caps words, and are colored pink in the syntax highlighting.
Settings and states can be referenced by their addresses or abbreviations in the instrument manuals or APIs. Since we had to read the documentation in order to generate the data capture form, we can save you time by making human-readable versions of every state and setting.
Values can be expressed as single items or lists of integers or hex bytes. For example, 1000, [3, 255], 0x3e8, and 0x03 0xE8 are all valid ways of expressing values. Correctly parsed values are colored purple in the syntax highlighting.
We have also provided a few expressions to chain and repeat commands: while, for, and if. These expressions follow the syntax of Python. For example, this (illustrative) loop repeats a query until the most recent frame's data matches a target value:

while frames[-1].frame_data != [0x01, 0x00]:
    QUERY status_word on node 1
Finally, each instrument has its own specific functionality. For example, in CANOpen devices that support trace tools, such as those developed by Copley, entering "WAIT FOR TRACE" will halt CANbus execution until the trace tool has acquired data; "DOWNLOAD" will acquire the trace data. On Tektronix devices, calling "DOWNLOAD", or "GET WAVEFORM" will transfer the current waveform to the current test run.
To see all available command types available to your current instrument, click on the "Show Cheat Sheet" link on the editor. You can also get hints as to valid completions of any line in the editor by moving your cursor to the end of the line and pressing ctrl+space.
Transparent boolean values form the basis of PLC programming, the logic that controls most manufacturing systems. This is because downtime in a factory can cost large amounts of money, and therefore faults need to be obvious in order to be quickly resolved. While individual components in modern assembly lines are increasingly complex, using computer vision, multi-axis manipulators, and even self-driving delivery platforms, their output must ultimately generate a single True/False value in order to be compatible with the larger process. At GradientOne, we recognize that our product, with its focus on a broad range of capabilities for research and development, may only be used for a specific task as one component in a larger system.
Therefore, we have recently added the capability to run pass/fail criteria against previously collected test results. Pass/Fail criteria can be applied to instrument-measured values, such as an oscilloscope-reported frequency of a repeating signal, or to previously calculated GradientOne measurements, such as a pattern match in a set of x-y data. This post will go over how to define a simple Pass/Fail criterion.
For this example, we will use pass/fail criteria to pull out all the samples in our CAN database where the motor was moving for at least 300 milliseconds. First, we created two patterns on a sample trace: one for when the velocity went from 0 to its maximum speed, and a second for when the velocity went from its maximum speed back to 0.
Next, we create a measurement suite. We give it the name Movement Pass/Fail, and it has two steps:
After saving, we can run the new suite by selecting the checkbox next to the name and clicking on Run Selected, selecting the results to run it against, and clicking on Run. The results will appear in the modal:
If we click on the results link, the results generated by this analysis suite appear under the plot:
We can also search through all data by that measurement. To find all the instances that passed, we type “Movement__03=1” into the search bar on the SEARCH tab. When converting measurements to search indexes, spaces are replaced with underscores, punctuation is removed, and Pass/Fail is converted to 1/0. As we can see, this returns 19 results:
Whereas searching for “Movement__03=0” returns no results, meaning no runs failed our test; all were moving for more than 0.3 seconds.
From this example, we hope to have inspired you to create your own pass/fail criteria. There are lots of scenarios we did not cover, such as the voltage on a trace being outside a range, the rise/fall time of a square pulse being too large, or the presence or absence of a decoded byte in a digital I/O trace.
In science, many discoveries are made by tracking anomalies in signals. In the LHC, finding an unusually large number of particles of a certain type in a high-energy collision's debris indicates the existence of a new type of particle. Pulsars were initially discovered by a careful study of a repetitive pattern in the radio waves hitting our planet. While you may not be probing the edge of known physics, tracking anomalies in your data can help you predict failure modes before they are discovered in the field. GradientOne's Pattern Matching analysis function can help you in your search for anomalies.
This blog post will demonstrate how to use Pattern Matching, and how it works.
Suppose we have the trace generated by a motor:
The blue line is the motor velocity and the yellow line is the position. While the position is increasing, the velocity is large and positive, and it drops down to zero once the target position is reached. The dip below zero occurs because the motor slightly overshoots its target. This is a pattern we might want to track - for example, does it always overshoot at this time stamp during the run? Does it always overshoot by the same amount? If we turn this section into a pattern, we can find all instances with the same phenomenon in our saved test results.
To create a new pattern:
The newly created pattern can be seen in the previews on the ANALYSIS page in the Patterns tab.
Creating and Running an Analysis
The newly-created end of movement pattern can be used to create an analysis suite. To create an analysis suite:
The new Suite should appear in the list. To run this suite against saved data:
The results page for any data captured in the run will have a new marker where the pattern matched. Hovering over this marker shows that the comment has been set to end of movement feature found, and that the author is GradientOne. The results page will also have new views: Pattern Overlay, which overlays the original pattern where it was matched, and Full Analysis, which shows areas, convolutions, and intersections. To remove these new traces and markers from the original data, run the Clear Generated Analysis Results suite on the data.
How it Works
Unlike other GradientOne analysis suites, the Pattern Search does not use any machine learning techniques; instead, it uses pure calculus. We test for congruence, meaning we check whether two shapes are roughly the same. If all of the points in shape A fit within the area of shape B, and the area of shape A is the same as the area of shape B, then shape A must be the same as shape B.
In practice, this means that we look for intersections between the area under the trace curve and the convolution of the trace and target. The animated gif below shows this calculation in action.
In this graph, the blue line is the trace being searched for the target pattern defined by the orange line. The green line represents the area under the blue curve at the current scan location of the target pattern (the area of the graph covered by the green hatch pattern) and the red line is the convolution between the trace and target pattern (the area of the graph covered by the red hatch pattern). At the location of intersection, the target matches the trace, which we highlight with a vertical line.
You can see the area and convolution curves, and where they intersect by selecting the Full Analysis:
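As a rough illustration, here is a simplified discrete analogue of this congruence test in Python. This is not GradientOne's implementation; it just slides a pattern along a trace and flags offsets where both the windowed area and the correlation match the pattern's own:

```python
# Target pattern and a trace with the pattern embedded at offset 3.
pattern = [0.0, 1.0, 2.0, 1.0, 0.0]
trace = [0.0] * 3 + pattern + [0.5] * 4

pattern_area = sum(pattern)                 # area under the pattern
pattern_corr = sum(p * p for p in pattern)  # pattern's self-correlation

matches = []
for offset in range(len(trace) - len(pattern) + 1):
    window = trace[offset:offset + len(pattern)]
    area = sum(window)                              # area under the window
    corr = sum(w * p for w, p in zip(window, pattern))  # correlation
    # A match requires both curves to intersect: same area AND same
    # correlation as the pattern itself.
    if abs(area - pattern_area) < 1e-9 and abs(corr - pattern_corr) < 1e-9:
        matches.append(offset)

print(matches)  # the offset where the pattern was embedded
```

Note that requiring both conditions matters: in this example another window has the same area as the pattern but a different correlation, so it is correctly rejected.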
Automated pattern matching techniques like this eliminate the need for engineers to use cursors, markers, and other manual tools to find the needle in the haystack and calculate measurements. Whether on the R&D bench or the production floor, cloud powered analytics are becoming a key asset in the pursuit of higher quality, better products, and increased engineering efficiency.
By: Catherine H., SW Developer at GradientOne
In the last post, we covered how to use the GradientOne CAN bus editor and the settings to acquire data by polling, as well as a brief introduction to PDOs and SDOs. In this post, we'll cover running the same test with the Copley controller's trace tool, which involves reading a multi-packet SDO.
The Trace Tool
The trace tool allows data to be captured at a regular interval and buffered within the Copley motor controller, then downloaded as one large SDO. The trace is set up by writing several SDOs. As before, we're going to collect the position and velocity:
# set all channels to empty
WRITE 0x0000 to trace_channel_1 on node 1
WRITE 0x0000 to trace_channel_2 on node 1
WRITE 0x0000 to trace_channel_3 on node 1
WRITE 0x0000 to trace_channel_4 on node 1
WRITE 0x0000 to trace_channel_5 on node 1
WRITE 0x0000 to trace_channel_6 on node 1
# set trace channel 1 to actual load position (0x001c)
WRITE 0x1c00 to trace_channel_1 on node 1
# set trace channel 2 to actual motor velocity (0x0017)
WRITE 0x1700 to trace_channel_2 on node 1
Next, configure the trace period and trigger. This trace period is defined in terms of reference periods, which is a factor of the smallest time interval and the total number of samples in the trace. The trace can be triggered off of several inputs, but in this example we will trigger off of a CAN packet:
# set the trace period to 1.0 (1.0/6.7e-6 trace period * 512 max samples * 5)
WRITE 0x1d00 to trace_period on node 1
# set the trace trigger configuration to a CAN packet trigger (0x000000000000)
WRITE 0x000000000000 to trace_trigger_configuration on node 1
Like in the previous example, we'll tell the motor to move to 60,000 steps:
# Set position mode to target position (0x01)
WRITE 0x01 to mode_of_operation on node 1
# Set target position to 60,000 (0xc0270900)
WRITE 0xc0270900 to trajectory_generator_position_command on node 1
# Set control word bit 4 (address 0x6040) to move (0x3f00)
WRITE 0x3f00 to control_word on node 1
# Set control word bit 4 to done (0x2f00)
WRITE 0x2f00 to control_word on node 1
We next trigger the trace data collection:
# send the trace trigger packet (0x0001)
WRITE 0x0001 to trace_trigger on node 1
Now, we want to keep asking the controller for the number of samples collected until we get the number of values required. We can accomplish this using GradientOne's while shorthand:
# request the trace sample count
QUERY trace_sample_count on node 1
# repeat this request while the trace sample count is less than 512
while frames[-1].frame_data < [0x03, 0x25, 0x00, 0x00, 0x02, 0x00, 0x00]:
    QUERY trace_sample_count on node 1
GradientOne provides an alternate shorthand for these commands, as remembering them can be cumbersome. Instead of the five lines above, you can simply write:
WAIT FOR TRACE
To aid in the data reconstruction, it is helpful to query the trace reference period. If the trace reference period is not in the CAN frames for this test run, GradientOne will assume that it is one, leading to an inaccurate time scale:
# read in the trace reference period
QUERY trace_reference_period on node 1
Finally, stop the trace so that the data can be downloaded:
# set the trace trigger packet to off (0x0000)
WRITE 0x0000 to trace_trigger on node 1
Downloading Large Data from an SDO
The total size of the trace data is 2 properties * 4 bytes per property * 512 samples = 4096 bytes. Since a single CAN packet can transmit at most 8 bytes, multiple packets must be exchanged in order to recreate all of the data. CANopen handles the acknowledgement by alternating the first byte of each acknowledgement packet between 0x60 and 0x70:
# request the trace data (0x2509)
WRITE to trace_data on node 1
# acknowledge each segment, alternating the toggle byte
while frames[-1].data == 0x10 or frames[-1].data == 0x00:
    if frames[-1].data == 0x10:
        0x601, 0x60
    else:
        0x601, 0x70
GradientOne also provides a shorthand for downloading trace data. Instead of the lines above, you can use:

DOWNLOAD
The advantage of using this DOWNLOAD command is that the loop is executed at the client, so the resulting frames are buffered and the download process is significantly faster than writing out a while loop.
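For reference, the size of the transfer can be worked out in a few lines. The seven-data-bytes-per-segment figure is standard for CANopen segmented transfers; the frame layout here is simplified:

```python
# Trace buffer size from the setup above.
properties = 2
bytes_per_property = 4
samples = 512
total_bytes = properties * bytes_per_property * samples  # 4096 bytes

# Each CANopen segment carries up to 7 data bytes after the command byte.
BYTES_PER_SEGMENT = 7
segments = -(-total_bytes // BYTES_PER_SEGMENT)  # ceiling division

# The host acknowledges each segment with a command byte whose toggle
# bit alternates, giving the 0x60 / 0x70 pattern described above.
acks = [0x60 if i % 2 == 0 else 0x70 for i in range(segments)]
print(total_bytes, segments, acks[:4])
```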
GradientOne can stitch the downloaded bytes into arrays of values and plot them. If we enter the full program into the editor as we did in part 1, GradientOne will automatically pick out the data:
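A sketch of what that stitching might look like in Python. The 32-bit little-endian signed format and the channel interleaving order are assumptions, chosen to match the 4-bytes-per-property layout described above:

```python
import struct

# Fake downloaded buffer: position and velocity samples interleaved,
# 4 bytes each, little-endian signed (an assumed layout).
raw = struct.pack("<6i", 0, 10, 100, 20, 200, 30)

# Stitch the bytes back into values, then de-interleave the channels.
values = struct.unpack("<%di" % (len(raw) // 4), raw)
position = values[0::2]  # trace channel 1: actual load position
velocity = values[1::2]  # trace channel 2: actual motor velocity
print(position, velocity)
```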
Trace through the GradientOne Trace Tool
As with the poll in part 1, GradientOne has also simplified acquiring a trace of data using the EDS file. As in part 1, fill out the config form, but this time select "trace" as the DAq method, and select "actual_load_position" and "actual_motor_velocity" as the parameters.