
Blog

Automated Measurements: A Day in the Life of a Clock Glitch

3/25/2018

Debugging glitches is a common task for engineers.  Modern oscilloscopes have good tools for triggering on a glitch, but what happens next can be cumbersome and time consuming.  We'll show you how GradientOne's automated measurements simplify the process, turning a multi-hour exercise into one that takes a few minutes.

Oops, You Detect a Glitch

The first thing to do when you detect a glitch is to acquire the signal for further analysis.  This can be done by setting up your scope with the appropriate trigger (glitch, runt, etc.).  Now that you have the signal captured, what do you do with it?  Next steps typically include analyzing the signal further, performing some measurements, and involving other members of your team in debugging.  This might entail:
  1. downloading the waveform data from the instrument
  2. plotting the results
  3. calculating measurements
  4. saving a screenshot
  5. cutting and pasting the items above into an email and circulating it with your team
  6. storing the data somewhere in case you need these results to compare against future tests

There are ways to do these tasks using features on the scope, by writing a software script, or with a bit of both.  But they all take time and programming expertise.
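For example, a rough sketch of step 1 alone with PyVISA might look like the following (the resource address and SCPI commands are assumptions; they vary by vendor and model):

import pyvisa

# connect to the scope over the network (address is an assumption)
rm = pyvisa.ResourceManager()
scope = rm.open_resource("TCPIP0::192.168.1.10::INSTR")

# ask for channel 1 waveform data in ASCII form
scope.write(":WAV:SOUR CHAN1")
scope.write(":WAV:FORM ASC")
raw = scope.query(":WAV:DATA?")
samples = [float(v) for v in raw.strip().split(",") if v]

And that only covers step 1; plotting, measuring, and sharing the results still remain.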

GradientOne Automation

The GradientOne approach to test and measurement automation was designed to address all of those steps each time a test is run.  

In this case, you can still configure your oscilloscope manually, or you can configure it over the web.  Once it is set up, you click the run button in the GradientOne web interface, and the test is run, automating steps 1-6 and providing a range of tools for post-acquisition analysis.


Demo of Automated Acquisition:
[Video: automated acquisition demo]

After the data is stored, you can easily use the mouse and the GradientOne cursor feature to perform measurements such as the peak amplitude of the glitch.  In the example below, the glitch is approximately 0.89 volts.
[Screenshot: cursor measurement of the glitch]
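If you would rather script the same measurement against the downloaded waveform data, a minimal sketch (assuming the info/channels result format shown in a later post, and assuming the first sample sits at the baseline; the file name is hypothetical) might be:

import json

# load a downloaded result entry
with open("result.json") as f:
    channel = json.load(f)["info"]["channels"][0]

baseline = channel["y_values"][0]
peak = max(channel["y_values"], key=lambda y: abs(y - baseline))
print("glitch amplitude: %.2f V" % abs(peak - baseline))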
The time savings from automating simple tasks are impactful at an individual level and transformative at an organization-wide level.  Compressing tasks that take an hour or two into a few minutes, several dozen times over the course of a project, for a team of engineers, may mean the difference between shipping early, on time, or late.  Automation with a cloud approach as the backbone can help make this happen.

Instrument Discovery

2/23/2018

A common challenge I see in many lab environments is a limited understanding of what test equipment exists in the lab.  In the best case scenario, someone maintains a spreadsheet containing instrument model, vendor, serial number, calibration date, and so on.  But let's face it: tracking in an Excel file what equipment is being used, what is gathering dust, what is being rented, and what is being loaned out to a partner is hard to keep up with, and the file will likely become inaccurate over time.
What is the best way to track these assets?
GradientOne developed a feature to automate discovery and utilization of test equipment (you can read about our utilization feature here).  Customers simply install our Discovery agent on the same network as their lab.  The Discovery agent monitors traffic, and when it senses a new piece of test equipment on the network, it characterizes it, uploads information to the customer's GradientOne web portal, and allows the user to register the device for tracking and utilization.

Take a look at the video below, which shows our Discovery feature in action.

[Video: Discovery feature demo]

Test Rig Utilization for the Lab and Manufacturing Floor

2/11/2018

  • View utilization by lab location, type, and product
  • Schedule use of test systems to optimize operations
  • Use trend information to plan new purchases


Optimal utilization of test systems drives more cost-effective operations in the test lab and improves visibility for budgeting and future capital expenditures.  The challenge is that many lab environments aren't able to easily collect data, track usage, and take the necessary steps to implement data-driven tools to balance usage and optimize their lab.  GradientOne's Test Rig Tracker & Scheduler is designed to help customers make decisions that improve their testing throughput while managing spend on test infrastructure.
Our approach to building this solution is guided by three basic principles:
  1. Simple deployment
  2. Do not disrupt the Test Engineer’s existing workflow
  3. Kill multiple birds with one stone: help both the engineering and finance teams
This blog post provides background on how it works, how to use it, and the various problems it can solve for your test infrastructure.

How It Works
Getting started is simple.  An engineer logs into their GradientOne web interface (Figure 1) and registers a Test Rig with the following information:
  • Test Rig Name
  • Test Rig Type
  • Location
  • Lab
  • Description
All of this information is indexed and made available for tracking, reporting, and trend analysis.
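If you would rather register Test Rigs programmatically, the /testrigs endpoint from our Sandwich Factory API posts can presumably accept the same fields; a minimal sketch, in which the type and lab field names are assumptions:

from json import dumps
from urlparse import urljoin  # Python 2; use urllib.parse in Python 3
from requests import session

data = {"name": "thermal-rig-1",            # Test Rig Name
        "type": "environmental chamber",    # Test Rig Type (assumed field name)
        "location": "Building 2",
        "lab": "Thermal Lab",               # assumed field name
        "description": "chamber plus data logger"}
response = session().post(urljoin(BASE_URL, "/testrigs"),
                          data=dumps(data), headers=HEADERS)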

[Figure 1]
After a Test Rig is registered, the GradientOne Agent is provisioned and installed on the Test Rig.  The Agent operates in the background: it activates itself upon Test Rig bootup, reports test rig usage to the GradientOne cloud platform, and integrates with the scheduling system, with no involvement required from the Test Engineer.
[Figure 2]

The Summary page (Figure 2) provides a global, comprehensive view of all Test Asset utilization information.  Filter and sort to customize views (Figure 3) based on test rig type, location, product line usage, and more.
[Figure 3]
View Test System utilization trends to plan for future purchases.  Figure 4 shows increasing weekly utilization of a test system, alerting engineering management to the need to procure a new system and aligning capital expense with business demand.
[Figure 4]
[Figure 5]
Scheduling is integrated so you can book Test Rig use for your teams (Figure 5).  Test Engineers can check test rigs in and out.  Test Labs can book Test Rig time and allocate it to specific customer engagements.

PCA with Recipes

2/7/2018

I recently bought a berry crumble from Wal-Mart that didn't live up to my expectations: it had way too much sugar and no oatmeal or cake batter, so the entire crumble went into the compost after a few bites. Was it that I had confused crumble with cobbler? In order to prevent wasting another 5 dollars in the future, I did what any data scientist would do: run Principal Component Analysis on text-mined recipes from the internet. I formatted the data into a table where the first column is whether crumble or cobbler appears in the title, and each remaining column is 1 or 0 for whether the word in the header appears in the recipe's text. I uploaded and ran PCA as described in a previous post. The scatterplot of component 1 vs component 2 looks like this:
[Scatterplot: component 1 vs component 2, crumbles and cobblers]
From the scatterplot, there's no clear distinction between the two, except for the cluster of cobblers on the lower right-hand side. The first component is able to separate a few cobblers from the rest of the recipes using the equation: 

pca1 = drink*0.120 + alcoholic*0.103 + quail*0.081 + christmas*0.072 + liqueur*0.063

Turns out there is a type of cocktail called a "cobbler", and that is what this first component is successfully separating out of the recipe set. The next component is: 

pca2 = bake*0.353 + fruit*0.333 + gourmet*0.307 + dessert*0.217

This component is picking up that cobblers are slightly more likely to be baked than crumbles, and are more likely to contain fruit (as opposed to vegetables).  However, unlike with the alcoholic cobblers, there's no clear delineation between the two. But now that I've written the script for grabbing recipes and creating these tables, why not ask another question I've wondered about: what is the difference between a lunch food and a breakfast food? Doing the same process, the scatterplot looks like this:
[Scatterplot: component 1 vs component 2, breakfasts and lunches]
It's possible to separate a lot of the lunch recipes from the breakfasts, but almost all the breakfasts overlap with the lunch space and so could be considered lunches. The first component is:

pca1 = oyster*0.390 + shrimp*0.365 + low cholesterol*0.240 + prune*0.208 + celery*0.202 + dried fruit*0.202 + parsley*0.149 + seafood*0.113

So seafood and low-cholesterol are indicators of a recipe being a lunch food and not a breakfast food. This seems right; eggs are not a low-cholesterol food, and there are few breakfast foods that involve seafood. Unlike in the crumble vs cobbler example, this first component explains far more of the variance than the next components. 

PCA can answer similar questions across many domains. Instead of identifying the groups of ingredients that define different types of food, you might be identifying the common words in human-written feedback and descriptions of failures. Or you might be looking at the space of all descriptions of other products to find new potential products, or products that may be most familiar. For example, we might take our knowledge that breakfasts don't have seafood to open a seafood restaurant that is open in the morning, in order to take advantage of an untapped market.
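If you want to reproduce this kind of analysis outside the GradientOne platform, a minimal sketch of the table-building and PCA steps might look like the following, with scikit-learn standing in for the platform's analysis and two toy recipes standing in for the text-mined set:

from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import CountVectorizer

# (title, text) pairs standing in for the text-mined recipes
recipes = [("Berry Crumble", "butter sugar oats berries bake"),
           ("Peach Cobbler", "peaches sugar flour biscuit bake fruit")]
labels = ["cobbler" if "cobbler" in title.lower() else "crumble"
          for title, _ in recipes]

# one column per word, 1/0 for whether the word appears in the recipe text
vectorizer = CountVectorizer(binary=True)
table = vectorizer.fit_transform(text for _, text in recipes).toarray()

pca = PCA(n_components=2)
coords = pca.fit_transform(table)   # x/y coordinates for the scatterplots above
print(pca.components_[0])           # coefficients like those shown in pca1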

Setting up the Sandwich Factory: TestRigs and Utilization

1/11/2018

This blog is the second in a series on how to integrate equipment and tasks into GradientOne's API. These blog posts use the hypothetical example of robots making shape sandwiches.

In the last post in this series, we covered generating TestPlans and Commands. Now that we have commands in the queue, we need to generate TestRigs that can execute the commands and record their utilization.

In the sandwich factory, we have stations and robots. The stations can produce only one type of shape, but in any color. The robots move the stack of shapes around from station to station, and deliver them to the delivery zone.
[Illustration: sandwich factory stations and robots]
Now that we have a list of commands with TestPlans, we need to find and reserve the appropriate stations and robots to complete each stack. These stations and robots will be bundled into individual TestRigs. One strategy would be to make our TestRigs contain one station of each type and a robot. However, since we have ten robots and three stations of each type, we would only be able to create three TestRigs. Since not every sandwich stack needs a station of every type, we can have more operations happening concurrently if we dynamically create TestRigs based on need.

To create a TestRig, make a post to /testrigs:
from json import dumps
from urlparse import urljoin  # Python 2; use urllib.parse in Python 3
from requests import session

# the station and robot IDs that make up this rig
equipment_ids = ["s1", "s8", "r1"]
data = {"name": "robots" + "".join(equipment_ids),
        "location": "robotsandwichdemo",
        "description": "robot in robot demo",
        "equipment_ids": equipment_ids}
response = session().post(urljoin(BASE_URL, "/testrigs"),
                          data=dumps(data), headers=HEADERS)
# get the unique id for this testrig
testrig_id = response.json()["id"]
Once this TestRig is created, we can record that it is in use by making a post to /utilizations:
from datetime import datetime

now = datetime.now()
data = {"testrig_id": testrig_id,
        "start": now.strftime("%Y-%m-%d %H:%M"),
        "duration": 100,  # expected booking length, in minutes
        "ignore_in_use": False}
response = session().post(urljoin(BASE_URL, "/utilizations"),
                          data=dumps(data), headers=HEADERS)
When the sandwich stack is complete, we can record that this TestRig is no longer in use by making a new post to /utilizations:
now = datetime.now()
data = {"testrig_id": testrig_id,
        "end": now.strftime("%Y-%m-%d %H:%M"),
        "ignore_in_use": True}
response = session().post(urljoin(BASE_URL, "/utilizations"),
                          data=dumps(data), headers=HEADERS)
When making our start utilization post, we set ignore_in_use to False so that we don't double-book a TestRig. When making our end utilization post, we set ignore_in_use to True, because we previously told utilizations that the TestRig was going to be used for 100 minutes, so we are interrupting our previous booking.

Now that we have logged our utilizations, we can look at the aggregated data for ways to optimize our sandwich factory. We'll cover this in the next post.

Setting up the Sandwich Factory: TestPlans and Commands

1/11/2018

This blog is the first in a series on how to integrate equipment and tasks into GradientOne's API. These blog posts will use the hypothetical example of robots making shape sandwiches.

TestPlans are recipes for equipment tasks. Like recipes, they have a list of steps, and each step may require different combinations of equipment. For example, testing an odometer might consist of the steps:
  1. zero both odometer A (which is known to be accurate) and odometer B (with unknown accuracy)
  2. command an attached motor to move at 70 miles per hour for an hour
  3. read both odometer A and odometer B
  4. run a pass/fail criteria analysis on whether odometer A is equal to odometer B within a value of epsilon
In the sandwich factory, the steps are 2-integer arrays where the first integer is the color (0 = brown, 1 = green, 2 = gray, 3 = yellow) and the second is the shape (0 = square, 1 = triangle, 2 = circle). Steps don't have to be an array; they can be any JSON object. To create a TestPlan, make a post to /testplans. Here is some sample Python for making this post:
from json import dumps
from urlparse import urljoin  # Python 2; use urllib.parse in Python 3
from requests import session

data = {"name": "sandwich1",
        "description": "sandwich 1",
        "steps": [[0, 1], [0, 2], [3, 1]]}
response = session().post(urljoin(BASE_URL, "/testplans"),
                          data=dumps(data), headers=HEADERS)
# get the unique id for this testplan
testplan_id = response.json()["id"]
BASE_URL and HEADERS need to be specified by you; they can come from your /etc/gradient_one.cfg file if you're running a gateway client. Upon successful completion of the post, the server will respond with a generated id associated with that TestPlan. With this id, we can add the plan to the queue of commands to be executed by making a post to /commands:

data = {"arg": testplan_id, "category": 'Plan', "tags": ['testplanDemo']}
response = session().post(urljoin(BASE_URL, "/commands"), data=dumps(data), headers=HEADERS)

This command has been tagged with testplanDemo so that we can easily filter the commands related to the sandwich factory. If you have multiple gateways pointing to the same web instance, you can route commands using the gateway argument. We can pull all pending commands with a get request to /commands:
params = {"status": "pending", "tag": "testplanDemo"}
response = session().get(urljoin(BASE_URL, "/commands"), 
                         params=params, headers=HEADERS)
commands = response.json()["commands"]

The client controlling the robots and setting task allocations can get the TestPlan by querying the id that comes with the command:
params = {"id": commands[0]["arg"]}
response = session().get(urljoin(BASE_URL, "/testplans"),
                         params=params, headers=HEADERS)
steps = response.json()["steps"]
Once the client has found a set of equipment IDs that can perform the task, we can set the command to "in progress" so that it will no longer show up in the command queue and won't be assigned twice.

data = {"command_id": commands[0]["id"], "status": "in progress"}
response = session().post(urljoin(BASE_URL, "/commands"),
                          data=dumps(data), headers=HEADERS)
Once the command is completed, you can use the same post to set the status to "complete". In our demo, after creating the TestPlans and adding them to the command queue, we generate the table below by querying the pending command queue and then querying the individual TestPlans to get the steps:
[Table: pending commands and their steps]

In the next post, we will cover how to create TestRigs and record their utilization.

Web interface for uploading results

12/28/2017

GradientOne has two types of result entries:
  • Result entries typically contain time-series plottable data in a channels object, as well as individual pieces of metadata, such as the instrument type, config values, and timebase range:
{
  "info": {
    "instrument_type": "RigolDS1054Z",
    "channels": [
      {
        "name": "chan1",
        "start_time": 0,
        "enabled": true,
        "trigger_level": 1.52,
        "offset": 0.48,
        "time_step": 4e-07,
        "y_values": [
          0.0438,
          0,
          0.0438,
          ...
        ],
        "coupling": "dc"
      },
      ...
    ],
    "config_excerpt": {
      "timebase": {
        "position": 0
      },
      "enabled_list": [
        "chan1",
        "chan2"
      ],
      "channels": [
        {
          "name": "chan1",
          "enabled": true,
          "range": 8,
          "offset": 0.48,
          "input_impedance": 1000000,
          "coupling": "dc"
        },
        ...
      ],
      "trigger_edge_slope": "positive",
      "trigger": {
        "source": "chan1",
        "type": "edge",
        "coupling": "dc",
        "level": 1.52
      },
      "acquisition": {
        "record_length": 6000,
        "start_time": -0.0012,
        "time_per_record": 0.0024,
        "type": "normal",
        "number_of_averages": 2
      }
    },
    "timebase_scale": 0.0002,
    "h_divs": 12,
    "slice_length": 6000,
    "timebase_range": 0.0024,
    "num_of_slices": 1
  },
  "slice_length": 6000,
  "timebase_range": 0.0024,
  "num_of_slices": 1,
  "date_created": "2017-12-15T17:50:36.042800Z",
  ...
}
  • Meta Result entries are generated by running a Meta Analysis. The metadata from the selected results are compiled into a dataframe object, where each field is grouped into a list:
{
  "info": {
    "dataframe": {
      "instrument_type": [
        "RigolDS1054Z",
        "RigolDS1054Z",
        ...
      ],
      "slice_length": [
        6000,
        5000,
        ...
      ],
      "date_created": [
        "2017-12-15T17:50:36.042800Z",
        "2017-12-15T12:30:26.000000Z",
        ...
      ],
      ...
    }
  }
}
With the Result entries, you can visualize and perform measurements on the time-series data, such as looking for patterns in the data, measuring rise/fall times, and checking pass/fail criteria.

With the Meta Result entries, you can look for explanations of how the presence of these patterns, the rise/fall times, or the configuration information affects whether the result passed or failed by some other criteria. For example, it might be that the presence of a peak at a specific location in the time series is correlated with failure, or that failed results are more likely to come from a specific test rig.

In addition to collecting data from GradientOne-integrated testrigs, you can upload data through the GradientOne web interface. To access it, go to /uploads. We support data in JSON, xls, xlsx, and csv formats. In JSON, the data must be in the info/channels format, either as a single object or as an array of multiple objects.
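For the tabular formats, a small csv along these lines (the columns and values here are invented to mirror the examples below) is enough to try each interpretation option:

Number,Average Cell Temperature (C),Average Cell Gain (dB),Result
0,1.4,5.1,Pass
1,1.4,4.9,Pass
2,1.3,5.0,Fail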

After adding a supported file, the page will attempt to interpret the data:
[Screenshot: upload preview]
By default, the file will be interpreted as a table where every row is the metadata of a single result. In this example, Result 1 will be uploaded as:
"info": {
    "Average Cell Current (mA)": 4.8,
    "Average Cell Gain (dB)": 3.1,
    "Result": "Pass",
    "Max Cell Temperature (C)": 0.2,
    "Average Cell Temperature (C)": 1.6
},
If you instead select Rows are channels of a single result, the data will be uploaded as a single result, where Channel 1 is row 1:
{
  "info": {
    "channels": [
      {
        "name": 0,
        "y_values": [
          1.4,
          5.1,
          0.2,
          ...
        ]
      },
      {
        "name": 1,
        "y_values": [
          1.4,
          4.9,
          0.2,
          ...
        ]
      },
      ...
    ]
  }
}
If you instead select Rows are single channels of multiple results, the data will be uploaded as multiple results, where Result 1 is row 1:
{
  "info": {
    "channels": [
      {
        "name": 0,
        "y_values": [
          1.4,
          5.1,
          0.2,
          ...
        ]
      }
    ]
  }
}
Result 2 will be: 
  "info": {
    "channels": [
      {
        "name": 0,
        "y_values": [
          1.4,
          4.9,
          0.2,
          ...
        ]
      }
    ]
  }
}
If you select any of the Columns options, the columns will be highlighted instead of the rows:
[Screenshot: upload preview with columns highlighted]
If you select Columns are result metadata entries, Result 1 will be:
{
    "info": {
        "0": 1.4,
        "1": 1.4,
        "2": 1.3,
        ...
    }
}
If you select Columns are channels of a single result, only one result will be created:
{
    "info": {
        "channels": [
            {"name": "Number", 
             "y_values": [0, 1, 2, ...]},
            {"name": "Average Cell Temperature", 
             "y_values": [1.4, 1.4, 1.3, ...]},
            ...
        ]
    }
}
If you select Columns are the single channels of multiple results, then Result 1 will be:
{
    "info": {
        "channels": [
            {"name": "Number", 
             "y_values": [0, 1, 2, ...]},
        ]
    }
}
and Result 2 will be:
{
    "info": {
        "channels": [
            {"name": "Average Cell Temperature", 
             "y_values": [1.4, 1.4, 1.3, ...]}
        ]
    }
}
Make sure that you add a config name so that you can find your uploaded results later. Click Submit when ready. After uploading, links to the generated results will appear in the Link column.
[Screenshot: generated result links]

Measuring the Effect of Factors with Principal Component Analysis

12/20/2017

Principal Component Analysis is a way of determining how much different factors influence a dependent variable, and whether they are positively or negatively correlated with it. This article will explain how to do PCA on GradientOne's platform and go through interpreting the results.

A manufacturer of LED TVs wants to investigate the causes of pixel failure in their TVs. If there are no dead pixels in a screen, the screen is tagged as a Pass. If a single pixel on a screen is dead, it is tagged as a Warning, because most people won't notice a single dead pixel. If more than one pixel on the screen is dead, it is tagged as a Fail.

During the testing phase, the gain (meaning how much current is produced for the incident light intensity), the current, and the temperature of each pixel are measured. These results were generated by running a configuration called "pixel". The average gain, average current, average temperature, and max temperature are stored in the metadata section of each result:
"info": {
    "Average Cell Current (mA)": 4.8,
    "Average Cell Gain (dB)": 3.1,
    "Result": "Pass",
    "Max Cell Temperature (C)": 0.2,
    "Average Cell Temperature (C)": 1.6
},
There are multiple factors here (Average Cell Current, Gain, Temperature) that lead to a single result (Pass/Fail/Warn). Some or all of these factors may contribute with different strengths, and some might be negatively correlated with the result; e.g., maybe a high max temperature but a low average temperature leads to failures. A pretty good first guess would be to say that all factors contribute equally, i.e.:

Result = Average Cell Current + Average Cell Gain + Max Cell Temperature + Average Cell Temperature

If we were to draw the histogram of this calculated "Result", it would look like this:
[Histogram: equal-weight sum, colored by Pass/Warn/Fail]
The x coordinate here is our sum result, and the y coordinate is the number of results with that sum. This histogram shows that our first guess was okay: the Passes are lumped to the left and the Failures to the right. There is some underlying signal in our data, but it would be hard to draw a dividing line between where the warnings end and the failures begin. Principal Component Analysis allows us to optimize the coefficients on the factors in the equation above so that we can easily separate the three results. Generating the same histogram for our PCA-optimized equation results in:
[Histogram: PCA-optimized component, colored by Pass/Warn/Fail]
In this histogram, there is less overlap between Pass, Fail, and Warn, and the Pass results are even further from the Fail results. PCA will generate as many of these equations as it has factors. Instead of histograms, GradientOne chooses to present PCA results as a scatterplot of the first component vs the second, so that we can take advantage of two dimensions to show even further separation between clusters.
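For intuition, here is a minimal offline sketch of the same optimization using scikit-learn; the metadata values are illustrative, and this stands in for, rather than reproduces, the platform's implementation:

import numpy as np
from sklearn.decomposition import PCA

# rows are results; columns are current, gain, max temp, average temp
factors = np.array([[4.8, 3.1, 0.2, 1.6],    # Pass
                    [5.1, 2.9, 3.8, 2.9],    # Warning
                    [5.9, 2.7, 4.1, 3.5],    # Fail
                    [4.9, 3.0, 0.4, 1.7]])   # Pass

naive = factors.sum(axis=1)            # the "all factors equal" first guess
pca = PCA(n_components=2).fit(factors)
coords = pca.transform(factors)        # pca1 vs pca2 scatterplot coordinates
print(pca.components_[0])              # optimized coefficients, as in pca1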
To run PCA, click on Principal Component Analysis on the Analysis page.
[Screenshot: Analysis page]
On the modal, enter pixel in the Select Data box, and wait for the modal to show results for "pixel". Then click on Select All:
[Screenshot: Select Data modal]
You will need to provide additional input: the dependent variable. In this case, the dependent variable is the pass/fail Result:
[Screenshot: dependent variable selection]
Then scroll all the way down the modal to the run button. The gears will turn as the metadata is compiled, and a link to the meta result will appear after the command is complete:
[Screenshot: meta result link]
Principal Component Analysis generates a list of transforming equations that attempt to explain the spread of values in a set. PCA will generate as many component equations as there are input factors. In this case, there are 4 factors (Current, Gain, Max Temperature, and Average Temperature), so four equations (pca1 through pca4) are generated. Equations are sorted by how much of the spread they explain. The amount of spread explained is posted below the plot:
[Screenshot: component equations and explained spread]
The largest factor in pca1 is the Average Cell Temperature, at 0.864. The next largest factors are about equal: Max Cell Temperature and Average Cell Current. The factor on Average Cell Gain is small and negative. A reasonable interpretation is that the Average Cell Temperature can explain most of the warnings and failures, that Average Cell Current and Max Cell Temperature also have some explanatory power, and that Average Cell Gain is not important. The chart above the equations shows the data with the plotting axes transformed to pca1 and pca2:
[Scatterplot: pca1 vs pca2]
The important things to note are that:
  1. the data is roughly vertically aligned (there is no tilt upwards or downwards), meaning that there is no other factor that hasn't been accounted for by pca1 or pca2
  2. you could imagine drawing two vertical lines that would nicely partition the passes from the warnings and failures, meaning that pca1 is able to explain most of the differences between passes, warnings and failures.
  3. failures are on the positive side of the graph, meaning that larger Average Cell Temperatures result in failures.

Investigating Failures with Categorical Metadata

12/11/2017

GradientOne provides several tools for performing meta-analyses of results. As devices become increasingly complex, the point or points of failure can be harder to identify. Incorporating data mining techniques such as machine learning or exploration into the process of bringing new products to market can help at multiple stages in the pipeline. It can help in the research and development phase by predicting the avenues of implementation most likely to be fruitful. It can help in the manufacturing stage by identifying faulty equipment before the final steps in the process, saving operating costs and time. And it can help in the market stage by identifying devices that need to be recalled or patched before failures are reported. This post will go over how to visualize and explore categorical data with a hypothetical example.

A robotics company wants to investigate the cause of failures. They have iterated the design of their robots over the years, such that although each robot runs the same software and has the same chassis, the components may have changed. For example, some robots have Nylon Supports instead of aluminum ones, some robots have a Copper Heatsink on the CPU, some robots have Mecanum Wheels instead of regular ones, and some robots have Lithium Batteries instead of Ni-Cad batteries. The robotics company has a record of all of the components that went into each of their robots, and uploads each part manifest as a result, where 0 means that part is not in the robot and 1 means that part is in the robot:
"info": {
    "Nylon Supports": 1,
    "Copper Heatsink": 0,
    "Result": "Pass",
    "Lithium Battery": 0,
    "Mecanum Wheels": 1
},

Since this data is categorical, and not numerical, looking at the scatterplots does not yield useful results:
[Scatterplot matrix of the categorical part data]
Instead, the Decision Tree meta analysis is the best tool for categorical data. Since the Decision Tree analysis is a supervised learning analysis, whereas the Scatterplot Matrix is an unsupervised one, the test engineer will need to provide additional input: the dependent variable. In this case, the dependent variable is the pass/fail Result:
[Screenshot: dependent variable selection]
View Results takes you to the Decision Tree. Two trees are generated. The first is the Optimal Tree, which separates the most passes from failures using the fewest decision points:
[Decision tree: Optimal Tree]
From this tree, we can see that the combination of Mecanum wheels and a copper heatsink or a lithium battery will likely result in a failure, and that the combination of regular wheels, aluminum supports, and a copper heatsink also leads to failures. Scrolling down to the partition plot, we can see that this tree has a good partition of passes and failures: there are relatively few passes in any section with a lot of failures, and vice versa:
[Partition plot: Optimal Tree]
The other tree, the Symmetric Tree, can be seen by changing the value in the dropdown. It expands all the parameters in the same order, which makes it useful for data exploration:
[Decision tree: Symmetric Tree]
The partition plot for the symmetric tree has more partitions than the optimal tree, but still has a good separation between the passes and failures. From this graph, we can see that the Mecanum wheels are the primary cause of failures, though it's harder to tell how the other components contribute:
[Partition plot: Symmetric Tree]
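If you want to experiment with the same idea offline, a minimal sketch with scikit-learn's DecisionTreeClassifier might look like this; the manifests and results below are illustrative, and this is a stand-in for, not a reproduction of, the platform's analysis:

from sklearn.tree import DecisionTreeClassifier, export_text

# columns: Nylon Supports, Copper Heatsink, Lithium Battery, Mecanum Wheels
manifests = [[1, 0, 0, 1],
             [0, 1, 0, 1],
             [0, 1, 1, 0],
             [1, 0, 0, 0]]
results = ["Fail", "Fail", "Fail", "Pass"]

tree = DecisionTreeClassifier().fit(manifests, results)
print(export_text(tree, feature_names=[
    "Nylon Supports", "Copper Heatsink", "Lithium Battery", "Mecanum Wheels"]))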

Investigating Failures with Numerical Metadata

12/11/2017

GradientOne provides several tools for performing meta-analyses of results. As devices become increasingly complex, the point or points of failure can be harder to identify. Incorporating data mining techniques such as machine learning or exploration into the process of bringing new products to market can help at multiple stages in the pipeline. It can help in the research and development phase by predicting the avenues of implementation most likely to be fruitful. It can help in the manufacturing stage by identifying faulty equipment before the final steps in the process, saving operating costs and time. And it can help in the market stage by identifying devices that need to be recalled or patched before failures are reported. This post will go over how to visualize and explore numerical data with a hypothetical example.

A manufacturer of LED TVs wants to investigate the causes of pixel failure in their TVs. If there are no dead pixels in a screen, the screen is tagged as a Pass. If a single pixel on a screen is dead, it is tagged as a Warning, because most people won't notice a single dead pixel. If more than one pixel on the screen is dead, it is tagged as a Fail.
During the testing phase, the gain (meaning how much current is produced for the incident light intensity), the current, and the temperature of each pixel are measured. These results were generated by running a configuration called "pixel". The average gain, average current, average temperature, and max temperature are stored in the metadata section of each result:
"info": { 
    "Average Cell Current (mA)": 4.8, 
    "Average Cell Gain (dB)": 3.1, 
    "Result": "Pass", 
    "Max Cell Temperature (C)": 0.2, 
    "Average Cell Temperature (C)": 1.6 
},

Since this data is numerical, the first thing a test engineer can do to investigate the cause of failures is to look at the scatterplots. On the Analysis page, the test engineer would click the checkbox next to Scatterplot Matrix, under Meta Analysis Suites, and then on Run Selected:
[Screenshot: Analysis page]
On the modal, enter pixel in the Select Data box, and wait for the modal to show results for "pixel". Then click on Select All:
[Screenshot: Select Data modal]
Then scroll all the way down the modal to the run button:
[Screenshot: run button]
The gears will turn as the metadata is compiled, and a link to the meta result will appear after the command is complete:
[Screenshot: meta result link]
The View Results link will take you to the scatterplot. The factors that appear on the x and y axis can be changed using the drop-downs to the right of the chart:
[Screenshot: scatterplot with axis drop-downs]
By selecting the gain and current values as axes, we can see a cluster of passes around the high-gain low-current corner of the plot, but no clear cluster distinction between the warnings and failures:
[Scatterplot: gain vs current]
However, when the axes are changed to average and max temperature, all three clusters become apparent:
[Scatterplot: average vs max temperature]
From this, the test engineer can conclude that although there are correlations between gain, current, and passing screens, the real cause is likely to be the temperature.
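For a quick offline equivalent of the scatterplot step, pandas' scatter_matrix works on the same kind of metadata table; the values here are illustrative, and this stands in for the platform's Scatterplot Matrix rather than reproducing it:

import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

# one row per screen, columns from the result metadata
df = pd.DataFrame({
    "Average Cell Current (mA)": [4.8, 5.1, 5.9],
    "Average Cell Gain (dB)": [3.1, 2.9, 2.7],
    "Max Cell Temperature (C)": [0.2, 3.8, 4.1],
    "Average Cell Temperature (C)": [1.6, 2.9, 3.5],
})
scatter_matrix(df)   # one scatterplot per pair of factors
plt.show()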