GradientOne
  • Home
  • Solutions
    • Overview
    • Test Engineering
    • Compliance Labs
    • Product Features
    • Supported Instruments
  • Documentation
    • White Papers
    • Getting Started
    • Quality Analysis of Test Data
    • Rigol Automation
    • Waveform Upload
    • Visualization & Measurements
    • News
    • Case Study
  • Try For Free
  • Pricing
    • Buy Now
    • Pilot
  • RIGOL
  • Blog

Blog

Web interface for uploading results

12/28/2017

GradientOne has two types of result entries:
  • Result entries contain time-series plottable data in a channels object, as well as individual pieces of metadata, such as the instrument type, config values, and timebase range:
{
  "info": {
    "instrument_type": "RigolDS1054Z",
    "channels": [
      {
        "name": "chan1",
        "start_time": 0,
        "enabled": true,
        "trigger_level": 1.52,
        "offset": 0.48,
        "time_step": 4e-07,
        "y_values": [
          0.0438,
          0,
          0.0438,
          ...
        ],
        "coupling": "dc"
      },
      ...
    ],
    "config_excerpt": {
      "timebase": {
        "position": 0
      },
      "enabled_list": [
        "chan1",
        "chan2"
      ],
      "channels": [
        {
          "name": "chan1",
          "enabled": true,
          "range": 8,
          "offset": 0.48,
          "input_impedance": 1000000,
          "coupling": "dc"
        },
        ...
      ],
      "trigger_edge_slope": "positive",
      "trigger": {
        "source": "chan1",
        "type": "edge",
        "coupling": "dc",
        "level": 1.52
      },
      "acquisition": {
        "record_length": 6000,
        "start_time": -0.0012,
        "time_per_record": 0.0024,
        "type": "normal",
        "number_of_averages": 2
      }
    },
    "timebase_scale": 0.0002,
    "h_divs": 12,
    "slice_length": 6000,
    "timebase_range": 0.0024,
    "num_of_slices": 1
  },
  "slice_length": 6000,
  "timebase_range": 0.0024,
  "num_of_slices": 1,
  "date_created": "2017-12-15T17:50:36.042800Z",
  ...
}
  • Meta Result entries are generated by running a Meta Analysis. The metadata will be compiled into a dataframe object, where each metadata field is grouped into a list:
{
  "info": {
    "dataframe": {
      "instrument_type": [
        "RigolDS1054Z",
        "RigolDS1054Z",
        ...
      ],
      "slice_length": [
        6000,
        5000,
        ...
      ],
      "date_created": [
        "2017-12-15T17:50:36.042800Z",
        "2017-12-15T12:30:26.000000Z",
        ...
      ],
      ...
    }
  }
}
With the Result entries, you can visualize and perform measurements on the timeseries data, such as looking for patterns in the data, measuring rise/fall times, and checking pass/fail criteria.
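As a concrete illustration, the sketch below computes a 10%-90% rise time directly from a channel's y_values and time_step fields, in the Result format shown above. The rise_time function and the sample channel values are invented for illustration; this is not a GradientOne API.

```python
# Sketch: a 10%-90% rise-time measurement over a Result channel,
# using the "time_step" and "y_values" fields from the format above.
# Illustrative only -- not a GradientOne API.

def rise_time(channel):
    ys = channel["y_values"]
    lo, hi = min(ys), max(ys)
    t10 = lo + 0.1 * (hi - lo)   # 10% threshold
    t90 = lo + 0.9 * (hi - lo)   # 90% threshold
    # index of the first sample crossing each threshold
    i10 = next(i for i, y in enumerate(ys) if y >= t10)
    i90 = next(i for i, y in enumerate(ys) if y >= t90)
    return (i90 - i10) * channel["time_step"]

channel = {
    "name": "chan1",
    "time_step": 4e-07,
    "y_values": [0.0, 0.1, 0.5, 1.2, 2.4, 3.0, 3.1, 3.1],
}
print(rise_time(channel))  # rise time in seconds
```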

With the Meta Result entries, you can look for explanations of how the presence of these patterns, the rise/fall times, or the configuration information affects whether a result passed or failed by some other criteria. For example, it might be that the presence of a peak at a specific location in the timeseries is correlated with failure, or that failed results are more likely to come from a specific test rig.

In addition to collecting data from GradientOne-integrated test rigs, you can upload data through the GradientOne web interface. To access it, go to /uploads. We support data in JSON, xls, xlsx and csv format. In JSON, the data must be in the info/channels format, either as a single object or as an array of multiple objects.

After adding a supported file, the page will attempt to interpret the data:
Picture
By default, the file will be interpreted as a table where every row is the metadata of a single entry. In this example, Result 1 will be uploaded as:
"info": {
    "Average Cell Current (mA)": 4.8,
    "Average Cell Gain (dB)": 3.1,
    "Result": "Pass",
    "Max Cell Temperature (C)": 0.2,
    "Average Cell Temperature (C)": 1.6
},
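To make the mapping concrete, here is a minimal sketch of how a table in this interpretation could turn into one info object per row. The column names mirror the example above, but the parsing code and the second row's values are invented; the upload page performs this conversion for you.

```python
import csv
import io

# Sketch: the "rows are result metadata" interpretation -- one "info"
# dict per table row. Column names mirror the example above; the code
# and the second row are invented for illustration.
table = """Average Cell Current (mA),Average Cell Gain (dB),Result
4.8,3.1,Pass
5.2,2.9,Fail"""

results = []
for row in csv.DictReader(io.StringIO(table)):
    # naive parsing: treat numeric-looking cells as floats
    info = {k: (float(v) if v.replace(".", "", 1).isdigit() else v)
            for k, v in row.items()}
    results.append({"info": info})

print(results[0]["info"])  # first row becomes Result 1
```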
If you instead select Rows are channels of a single result, the data will be uploaded as a single result, where Channel 1 is row 1:
{
  "info": {
    "channels": [
      {
        "name": 0,
        "y_values": [
          1.4,
          5.1,
          0.2,
          ...
        ]
      },
      {
        "name": 1,
        "y_values": [
          1.4,
          4.9,
          0.2,
          ...
        ]
      },
      ...
    ]
  }
}
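The same transformation can be sketched in a few lines: each data row becomes one channel, named by its row index, or, for the multiple-results option, one single-channel result per row. The row values copy the example above; the code is illustrative, not the upload page's own.

```python
# Sketch: the "rows are channels of a single result" mapping -- each
# data row becomes one channel named by its row index. Row values copy
# the example above; illustrative only.
rows = [
    [1.4, 5.1, 0.2],
    [1.4, 4.9, 0.2],
]

# One result, one channel per row:
single = {"info": {"channels": [
    {"name": i, "y_values": row} for i, row in enumerate(rows)
]}}

# "Rows are single channels of multiple results" instead yields
# one single-channel result per row:
multiple = [{"info": {"channels": [{"name": 0, "y_values": row}]}}
            for row in rows]

print(single["info"]["channels"][1]["name"])  # second row -> channel 1
```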
If you instead select Rows are single channels of multiple results, the data will be uploaded as multiple results, where Result 1 is row 1:
{
  "info": {
    "channels": [
      {
        "name": 0,
        "y_values": [
          1.4,
          5.1,
          0.2,
          ...
        ]
      }
    ]
  }
}
Result 2 will be:
{
  "info": {
    "channels": [
      {
        "name": 0,
        "y_values": [
          1.4,
          4.9,
          0.2,
          ...
        ]
      }
    ]
  }
}
If you select any of the Columns options, the same interpretations are applied to the columns instead of the rows:
Picture
If you select Columns are result metadata entries, Result 1 will be:
{
    "info": {
        "0": 1.4,
        "1": 1.4,
        "2": 1.3,
        ...
    }
}
If you select Columns are channels of a single result, only one result will be created:
{
    "info": {
        "channels": [
            {"name": "Number", 
             "y_values": [0, 1, 2, ...]},
            {"name": "Average Cell Temperature", 
             "y_values": [1.4, 1.4, 1.3, ...]},
            ...
        ]
    }
}
If you select Columns are the single channels of multiple results, then Result 1 will be:
{
    "info": {
        "channels": [
            {"name": "Number", 
             "y_values": [0, 1, 2, ...]},
        ]
    }
}
and Result 2 will be:
{
    "info": {
        "channels": [
            {"name": "Average Cell Temperature", 
             "y_values": [1.4, 1.4, 1.3, ...]}
        ]
    }
}
Make sure that you add a config name so that you can find your uploaded results later. Click Submit when ready. After uploading, links to the generated results will appear in the Link column.
Picture

Measuring the Effect of Factors with Principal Component Analysis

12/20/2017

Principal Component Analysis (PCA) is a way of determining how strongly different factors influence a dependent variable, and whether they are positively or negatively correlated with it. This article will explain how to do PCA on GradientOne's platform and go through interpreting the results.

A manufacturer of LED TVs wants to investigate the causes of pixel failure in their TVs. If there are no dead pixels in a screen, the screen is tagged as a Pass. If a single pixel on a screen is dead, it is tagged as a Warning, because most people won't notice a single dead pixel. If more than one pixel on the screen is dead, it is tagged as a Fail.

During the testing phase, the gain (meaning how much current is produced for the incident light intensity), the current, and the temperature for each pixel are measured. These results were generated by running a configuration called "pixel". The average gain, average current, average temperature and max temperature are stored in the metadata section of each result:
"info": {
    "Average Cell Current (mA)": 4.8,
    "Average Cell Gain (dB)": 3.1,
    "Result": "Pass",
    "Max Cell Temperature (C)": 0.2,
    "Average Cell Temperature (C)": 1.6
},
There are multiple factors here (Average Cell Current, Gain, Temperature) that lead to a single result (Pass/Fail/Warn). Some or all of these factors may contribute with different strengths, and some might be negatively correlated with the result, i.e., maybe a high max temperature but low average temperature leads to failures. A pretty good first guess would be to say that all factors contribute equally, i.e.:

Result = Average Cell Current + Average Cell Gain + Max Cell Temperature + Average Cell Temperature

If we were to draw the histogram of this calculated "Result", it would look like this:
Picture
The x coordinate here is our sum result, and the y coordinate is the number of results with that sum. This histogram shows that our first guess was okay - the Passes are lumped to the left and the Failures to the right. There is some underlying signal in our data, but it would be hard to draw a dividing line between where the warnings end and the failures begin. Principal Component Analysis allows us to optimize the coefficients on the factors in the equation above so that we can easily separate the three results. Generating the same histogram for our PCA-optimized equation results in:
Picture
​In this histogram, there is less overlap between Pass, Fail and Warn, and the Pass results are even further from the Fail results. PCA will generate as many of these equations as it has factors. Instead of histograms, GradientOne chooses to present PCA results as a scatterplot of the first component vs the second so that we can take advantage of two dimensions to show even further separation between clusters.
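Under the hood, this is a standard PCA computation. The sketch below reproduces the idea with scikit-learn on synthetic data standing in for the four metadata factors; the random data is invented and this is not GradientOne's actual implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch: the kind of computation behind the PCA meta analysis, using
# scikit-learn. Synthetic data stands in for the four metadata factors;
# not GradientOne's actual implementation.
rng = np.random.default_rng(0)
n = 200
avg_temp = rng.normal(1.5, 0.5, n)            # dominant factor
max_temp = avg_temp + rng.normal(0, 0.2, n)   # correlated with avg_temp
current = rng.normal(4.8, 0.3, n)
gain = rng.normal(3.1, 0.1, n)
X = np.column_stack([current, gain, max_temp, avg_temp])

pca = PCA(n_components=4)
scores = pca.fit_transform(X)         # pca1..pca4 value for each result
print(pca.components_[0])             # coefficients of the pca1 equation
print(pca.explained_variance_ratio_)  # spread explained per component
```

The components_ rows are the "transforming equations" the article describes, and explained_variance_ratio_ is the amount of spread each one explains, sorted largest first.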
To run PCA, click on Principal Component Analysis on the Analysis page.
Picture
On the modal, enter pixel in the Select Data box, and wait for the modal to show results for "pixel". Then click on Select All:
Picture
You will need to provide additional input: the dependent variable. In this case, the dependent variable is the pass/fail Result:
Picture
Then scroll all the way down the modal to the run button. The gears will turn as the metadata is compiled, and a link to the meta result will appear after the command is complete:
Picture
Principal Component Analysis generates a list of transforming equations that attempt to explain the spread of values in a set. PCA will generate as many component equations as there are input factors. In this case, there are 4 factors (Current, Gain, Max Temperature and Average Temperature), so four equations (pca1 through pca4) are generated. Equations are sorted by how much of the spread they explain. The amount of spread explained is posted below the plot:
Picture
The largest factor in pca1 is the Average Cell Temperature, at 0.864. The next largest factors are about equal: Max Cell Temperature and Average Cell Current. The factor on Average Cell Gain is small and negative. A reasonable interpretation is then that the Average Cell Temperature can explain most of the warnings and failures, that Average Cell Current and Max Cell Temperature also have some explanatory power, and that Average Cell Gain is not important. The chart above the equations shows the data with the plotting axes transformed to pca1 and pca2:
Picture
The important things to note are that:
  1. the data is roughly vertically aligned (there is no tilt upwards or downwards in the data), meaning that there is no other factor that hasn't been accounted for by pca1 or pca2.
  2. you could imagine drawing two vertical lines that would nicely partition the passes from the warnings and failures, meaning that pca1 is able to explain most of the differences between passes, warnings and failures.
  3. failures are on the positive side of the graph, meaning that larger Average Cell Temperatures result in failures.

Investigating Failures with Categorical Metadata

12/11/2017

GradientOne provides several tools for performing meta-analyses of results. As devices become increasingly complex, the point or points of failure can be harder to identify. Incorporating data mining techniques such as machine learning or data exploration into the process of bringing new products to market can help at multiple stages in the pipeline. It can help in the research and development phase by predicting the possible avenues of implementation most likely to be fruitful. It can help in the manufacturing stage by identifying faulty equipment before the final steps in the process, thus saving operation costs and time. And it can help in the market stage by identifying devices that need to be recalled or patched before failures are reported. This first post will go over how to visualize and explore categorical data with this hypothetical example.

A robotics company wants to investigate the cause of failures. They have iterated the design of their robots over the years, such that although each robot is running the same software and has the same chassis, the components may have changed. For example, some robots have Nylon Supports instead of aluminum ones, some robots have a Copper Heatsink on the CPU, some robots have Mecanum Wheels instead of regular ones, and some robots have Lithium Batteries instead of Ni-Cad batteries. The robotics company has a record of all of the components that went into each of their robots, and uploads each part manifest as a result, where a 0 means that part is not in the robot, and a 1 means that part is in the robot:
"info": {
    "Nylon Supports": 1,
    "Copper Heatsink": 0,
    "Result": "Pass",
    "Lithium Battery": 0,
    "Mecanum Wheels": 1
},

Since this data is categorical, and not numerical, looking at the scatterplots does not yield useful results:
Picture
Instead, the Decision Tree meta analysis is the best tool for categorical data. Since the Decision Tree analysis is a supervised learning analysis, whereas the Scatterplot Matrix was an unsupervised learning analysis, the test engineer will need to provide additional input: the dependent variable. In this case, the dependent variable is the pass/fail Result:
Picture
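For readers who want to experiment outside the platform, a decision tree over this kind of 0/1 part-manifest metadata can be sketched with scikit-learn. The manifests and pass/fail labels below are made up, and the resulting tree is not GradientOne's actual output.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Sketch: a supervised decision tree over 0/1 part-manifest metadata,
# with the pass/fail Result as the dependent variable. Training data
# is invented for illustration.
features = ["Nylon Supports", "Copper Heatsink", "Lithium Battery",
            "Mecanum Wheels"]
X = [
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
]
y = ["Fail", "Fail", "Pass", "Pass", "Fail", "Pass"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(clf, feature_names=features))  # text view of the tree
```

In this toy data every failure has Mecanum Wheels, so the fitted tree splits on that feature first, mirroring how the optimal tree surfaces the most discriminating component.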
View Results takes you to the Decision Tree. Two trees are generated. The first is the Optimal Tree, which separates the most passes from failures using the fewest decision points:
Picture
From this tree, we can see that the combination of Mecanum wheels and either a copper heatsink or a lithium battery will likely result in a failure, and that the combination of regular wheels, aluminum supports, and a copper heatsink also leads to failures. Scrolling down to the partition plot, we can see that this tree has a good partition of passes and failures: there are relatively few passes in a section with a lot of failures, and vice-versa:
Picture
The other tree, the Symmetric Tree, can be seen by changing the value in the dropdown. It expands all the parameters in the same order, which makes it useful for data exploration:
Picture
The partition plot for the symmetric tree has more partitions than the optimal tree's, but still has a good separation between the passes and failures. From this plot, we can see that the Mecanum wheels are the primary cause of failures, though it's harder to tell how the other components contribute:
Picture

Investigating Failures with Numerical Metadata

12/11/2017

GradientOne provides several tools for performing meta-analyses of results. As devices become increasingly complex, the point or points of failure can be harder to identify. Incorporating data mining techniques such as machine learning or data exploration into the process of bringing new products to market can help at multiple stages in the pipeline. It can help in the research and development phase by predicting the possible avenues of implementation most likely to be fruitful. It can help in the manufacturing stage by identifying faulty equipment before the final steps in the process, thus saving operation costs and time. And it can help in the market stage by identifying devices that need to be recalled or patched before failures are reported. This first post will go over how to visualize and explore numerical data with this hypothetical example.

A manufacturer of LED TVs wants to investigate the causes of pixel failure in their TVs. If there are no dead pixels in a screen, the screen is tagged as a Pass. If a single pixel on a screen is dead, it is tagged as a Warning, because most people won't notice a single dead pixel. If more than one pixel on the screen is dead, it is tagged as a Fail.
​
During the testing phase, the gain (meaning how much current is produced for the incident light intensity), the current, and the temperature for each pixel are measured. These results were generated by running a configuration called "pixel". The average gain, average current, average temperature and max temperature are stored in the metadata section of each result:
"info": { 
    "Average Cell Current (mA)": 4.8, 
    "Average Cell Gain (dB)": 3.1, 
    "Result": "Pass", 
    "Max Cell Temperature (C)": 0.2, 
    "Average Cell Temperature (C)": 1.6 
},

Since this data is numerical, the first thing a test engineer can do to investigate the cause of failures is to look at the scatterplots. On the Analysis page, the test engineer would click the checkbox next to Scatterplot Matrix, under Meta Analysis Suites, and then click Run Selected:
Picture
On the modal, enter pixel in the Select Data box, and wait for the modal to show results for "pixel". Then click on Select All:
Picture
Then scroll all the way down the modal to the run button:
Picture
The gears will turn as the metadata is compiled, and a link to the meta result will appear after the command is complete:
Picture
The View Results link will take you to the scatterplot. The factors that appear on the x and y axis can be changed using the drop-downs to the right of the chart:
Picture
By selecting the gain and current values as axes, we can see a cluster of passes around the high-gain low-current corner of the plot, but no clear cluster distinction between the warnings and failures:
Picture
However, when the axes are changed to average and max temperature, all three clusters become apparent:
Picture
From this, the test engineer can conclude that although there are correlations between gain, current, and passing screens, the real cause is likely to be the temperature.
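That conclusion can be sanity-checked numerically by grouping results by their pass/fail tag and comparing mean temperatures. The sketch below uses invented values in the same metadata format; real values would come from the compiled dataframe.

```python
# Sketch: group per-screen metadata by Result and compare the mean
# Average Cell Temperature of each group. The values are invented,
# standing in for the compiled metadata.
records = [
    {"Result": "Pass", "Average Cell Temperature (C)": 1.2},
    {"Result": "Pass", "Average Cell Temperature (C)": 1.4},
    {"Result": "Warning", "Average Cell Temperature (C)": 2.1},
    {"Result": "Fail", "Average Cell Temperature (C)": 3.0},
    {"Result": "Fail", "Average Cell Temperature (C)": 3.2},
]

groups = {}
for r in records:
    groups.setdefault(r["Result"], []).append(
        r["Average Cell Temperature (C)"])
means = {k: sum(v) / len(v) for k, v in groups.items()}
print(means)  # higher mean temperature for Fail than for Pass
```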

CAN Logger

12/4/2017

GradientOne provides a CAN packet sniffer, like Wireshark or Copley's CANView. The CAN logger can capture all CAN frames written to or read from the CAN interface of an attached device. It can pick up frames generated by a configuration run, but the frames will only be available after the configuration has finished. The gateway client must be in a "Ready" state before the logger can be started. This allows you to understand what the GradientOne movement configuration is doing, or to keep a record of all frames collected.
Go to /canlogs, and click on the Start button. The button will disappear while the web interface instructs the gateway client to start sending frames:
Picture
The Start button may become available before or after new frames appear in the table, depending on what other commands are in the queue. After clicking Start, logging remains on until the Stop button is clicked. Stop and Clear behave like Start: they send commands to the gateway client. Once Start is pressed, you should start seeing the frames sent by the status checker. If there are no commands in the queue, the status checker runs approximately once per minute. The status checker turns on heartbeats for each node and then queries the registers via SDO, so you will see heartbeats and SDO responses in the incoming frames:
Picture
You can use the "Filter to:" selection dropdown to view only SDO responses:
Picture
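The filter works on frame IDs. In standard CANopen, heartbeats appear at COB-ID 0x700 + node id and SDO responses at 0x580 + node id, so a classifier like the one sketched below can separate them. The frames and the classify helper are invented for illustration, assume those standard ID ranges, and are not the logger's implementation.

```python
# Sketch: classifying captured frames by CAN ID using standard CANopen
# COB-ID ranges (heartbeats at 0x700 + node id, SDO responses at
# 0x580 + node id). Frames are invented (id, data) tuples; the second
# one resembles an SDO upload response for object 0x6064, the CiA 402
# position actual value.
def classify(can_id):
    if 0x581 <= can_id <= 0x5FF:
        return "sdo_response"
    if 0x701 <= can_id <= 0x77F:
        return "heartbeat"
    return "other"

frames = [
    (0x701, b"\x05"),                                  # node 1 heartbeat
    (0x581, b"\x43\x64\x60\x00\x00\x00\x00\x00"),      # node 1 SDO reply
    (0x181, b""),                                      # PDO, filtered out
]
sdo_only = [f for f in frames if classify(f[0]) == "sdo_response"]
print(len(sdo_only))
```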
Or, you can filter out frames generated by the status checker by clicking on Exclude Status Frames:
Picture
In another window, you can run frames in the editor, such as initiating an SDO read of the motor's position:
Picture
After the editor config completes, these frames will show up in the CAN logger:
Picture
Logs can either be downloaded as a CSV, or saved on the GradientOne instance by clicking on Save. In both cases, the log will only include the frames that are on the screen. Saved logs are named with the Unix timestamp of when they were saved:
Picture
Once a log is loaded, you can share it with others who have access to the GradientOne instance by copying and pasting the URL.
Picture



OUR MISSION - 
GradientOne’s mission is to improve the work of engineers, manufacturing organizations, technical support teams, and scientists, by using cloud computing to streamline instrument management, data collection, analysis, reporting and search.
        ©2019 GradientOne Inc.  All Rights Reserved.