...

To learn more about Compound metrics, please see the C3.ai Developer Documentation here:

Finding, Evaluating, and Visualizing Metrics

...

After finding a metric, the next step is to evaluate it on data in a C3.ai Type.

Evaluating Metrics

Metrics are evaluated with either the 'evalMetrics' or 'evalMetricsWithMetadata' methods, which are brought in by the 'MetricEvaluatable' type. Behind the scenes, 'evalMetrics' and 'evalMetricsWithMetadata' fetch and transform raw data from a C3.ai Type into easy-to-analyze timeseries data. 'evalMetrics' is used to evaluate metrics provisioned (deployed) to a C3.ai tenant/tag. 'evalMetricsWithMetadata' allows users to evaluate metrics either provisioned to a C3.ai tenant/tag or defined on-the-fly in the JavaScript console or a hosted Jupyter notebook (typically for debugging).

To learn more about the differences between 'evalMetrics' and 'evalMetricsWithMetadata' see the C3.ai Developer Documentation here: https://developer.c3.ai/docs/7.12.0/type/MetricEvaluatable

To evaluate a metric, users must provide the following parameters (as an 'EvalMetricsSpec') to the 'evalMetrics' or 'evalMetricsWithMetadata' methods:

  1. ids ([string]): A list of ids in the C3.ai Type, on which you want to evaluate the metrics (e.g., "Germany", "California_UnitedStates")
  2. expressions ([string]): A list of metrics to evaluate (e.g., "JHU_ConfirmedCases", "Apple_DrivingMobility")
  3. start (datetime): Start datetime of the time range to be evaluated (in ISO 8601 format) (e.g., "2020-01-01")
  4. end (datetime): End datetime of the time range to be evaluated (in ISO 8601 format) (e.g., "2020-08-01")
  5. interval (string): Desired interval for the resulting timeseries data (e.g., MINUTE, HOUR, DAY, MONTH, YEAR)
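The 'start' and 'end' fields are plain ISO 8601 strings. As a minimal sketch (the ids and metric names below are taken from the examples above, and assembling the parameters as a plain dictionary first is just one convenient style), the values can be prepared in Python before constructing the spec:

```python
from datetime import date

# Format Python dates as the ISO 8601 strings expected by the spec.
start = date(2020, 1, 1).isoformat()  # '2020-01-01'
end = date(2020, 8, 1).isoformat()    # '2020-08-01'

# Field names mirror the EvalMetricsSpec parameters described above.
spec_args = {
    "ids": ["Germany", "California_UnitedStates"],
    "expressions": ["JHU_ConfirmedCases"],
    "start": start,
    "end": end,
    "interval": "DAY",
}
print(spec_args)
```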

Such an evaluation in Python might look like this:

Code Block
languagepy
spec = c3.EvalMetricsSpec({
	'ids': [ 'A', 'B', 'C' ],
	'expressions': [ 'SampleMetric', 'SampleMetric2' ],
	'start': '2019-01-01',
	'end': '2019-05-01',
	'interval': 'DAY',
})

results = c3.SampleType.evalMetrics(spec=spec)

The C3 AI Suite returns the evaluated metric results (a timeseries) as an 'EvalMetricsResult' type. By itself, this type isn't easy to analyze, so C3.ai provides helper functions to convert it, via the 'Dataset' type, into a Pandas DataFrame for further data analysis or model development in a Jupyter notebook, as shown below.

Code Block
languagepy
ds = c3.Dataset.fromEvalMetricsResult(result=results)
df = c3.Dataset.toPandas(dataset=ds)
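The exact shape of the resulting DataFrame depends on the metrics and ids evaluated. As a sketch of a typical follow-up step, the mock below assumes a long-format result with one row per id and date (these column names are illustrative assumptions, not the guaranteed output of 'toPandas'):

```python
import pandas as pd

# Mock of a long-format metric result: one row per (id, date) pair.
# The column names here are assumptions for illustration only.
df = pd.DataFrame({
    "id": ["A", "A", "B", "B"],
    "date": pd.to_datetime(["2019-01-01", "2019-01-02",
                            "2019-01-01", "2019-01-02"]),
    "SampleMetric": [1.0, 2.0, 3.0, 4.0],
})

# Pivot to one timeseries column per id, indexed by date.
wide = df.pivot(index="date", columns="id", values="SampleMetric")
print(wide)
```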

Additionally, users can visualize evaluated metric results directly in the web browser (i.e., the JavaScript console) with the 'c3Viz' function.

Here's an example of evaluating and visualizing a metric in the JavaScript console:

Code Block
languagejs
var spec = EvalMetricsSpec({
	ids: [ 'A', 'B', 'C' ],
	expressions: [ 'SampleMetric', 'SampleMetric2' ],
	start: '2019-01-01',
	end: '2019-05-01',
	interval: 'DAY'
});

var results = SampleType.evalMetrics(spec)
c3Viz(results)

...


To learn more about evaluating and visualizing metrics, please see the C3.ai Developer Documentation here:

...

Note: Metrics can only be evaluated on C3.ai Types that mix in the 'MetricEvaluatable' Type.

Conclusion

To get started quickly, focus on 'CompoundMetrics'. They're the easiest to use, and for most cases, the 'AVG' treatment works well.

Official C3 documentation:

Review and Next Steps

In most data exploration, C3.ai developers run the 'fetch' and 'evalMetrics' methods. This C3.ai DTI Quickstart guide provides an introduction to these methods, in which the C3 AI Suite is used as a read-only database, accessed via APIs. In the following guides, you will learn how to run 'write' operations on the C3 AI Suite, such as:

  • Defining new types
  • Loading new data
  • Cleaning up databases in your tag
  • Training machine learning models
  • And so on.

Welcome to the start of your experience with the C3 AI Suite.