February Updates to Azure Stream Analytics

We are pleased to share that the latest Azure Portal update brings several new Stream Analytics features.


Resume stopped jobs

To enable resuming stopped jobs without data loss, the Start command now includes an option to bring the job back up from the Last Stopped Time.

Support for Table Storage output

In addition to SQL Database, Event Hub, and Blob Storage, there is now built-in support for outputting to Azure Table Storage.


Detailed status for running jobs

Previously, any running Stream Analytics job had a top-level status of Running. To convey more detailed information about the health of a running job, we have added more granular states for running jobs: Processing, Idle, and Degraded.

Processing: The Stream Analytics job is successfully consuming filtered input events. If a job is stuck in the Processing state without producing output, the processing time window may be large or the query logic may be complicated. You may also have a query that filters out all events (for example, an overly strict WHERE clause or an INNER JOIN that finds no matches).
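As a sketch of the second case, the hypothetical query below (input, field names, and threshold are illustrative, not from the original post) consumes events but may never emit output if no reading ever satisfies the WHERE clause:

```sql
-- If no reading ever exceeds 500, this job stays in the Processing
-- state while producing no output events.
SELECT
    DeviceId,
    AVG(Temperature) AS AvgTemperature
FROM
    SensorInput TIMESTAMP BY EventTime
WHERE
    Temperature > 500
GROUP BY
    DeviceId,
    TumblingWindow(minute, 5)
```

Loosening the WHERE predicate, or testing it against sample data in the browser, is a quick way to confirm whether the filter is the cause.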

Idle: No input data has been seen in the last 2 minutes. If a job remains in the Idle state for a long period of time, it is likely that the input exists but there are no events to process. This can also occur if your output start time is later than the timestamps of the events in your input source.

Degraded: The Stream Analytics job is encountering one of the following error types: input/output communication errors, query errors, or retryable runtime errors. To determine which type of error the job is encountering, view the Operation Logs.

Job status banner

A banner now appears on the job dashboard when the job may need attention.

Nested record support

We have received a high volume of requests for support for nested records in JSON and Avro. This is now supported both for running jobs and the in-browser query testing experience. More information on nested records is available on the Data Types documentation page.
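To illustrate, nested record fields can be referenced with dot notation in a query. This is a minimal sketch; the input name and JSON field names (a device event with a nested Location record) are hypothetical:

```sql
-- Given JSON events like:
--   { "DeviceId": "d1", "Location": { "Lat": 47.6, "Long": -122.3 } }
-- nested fields are addressed with dot notation.
SELECT
    DeviceId,
    Location.Lat AS Latitude,
    Location.Long AS Longitude
FROM
    DeviceInput
```

The same dot notation works in the in-browser query testing experience against uploaded sample data.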

API Support for Event Hub Consumer Groups

Using the REST APIs, you can now specify a Consumer Group to use when reading from Event Hub. Associating a Stream Analytics job with its own Consumer Group gives the job its own view over the input stream, independent of other event readers. A Consumer Group is specified via the optional consumerGroupName property in the Create Input request and will be surfaced in the portal in a future update.
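A sketch of what the Create Input request body might look like with consumerGroupName set. The surrounding property names and values here (namespace, hub, and policy names) are illustrative placeholders, not values from the original post; consult the REST API reference for the exact schema:

```json
{
  "properties": {
    "type": "Stream",
    "datasource": {
      "type": "Microsoft.ServiceBus/EventHub",
      "properties": {
        "serviceBusNamespace": "examplenamespace",
        "eventHubName": "examplehub",
        "sharedAccessPolicyName": "examplepolicy",
        "sharedAccessPolicyKey": "<access key>",
        "consumerGroupName": "myjobconsumergroup"
      }
    },
    "serialization": {
      "type": "Json",
      "properties": {
        "encoding": "UTF8"
      }
    }
  }
}
```

If consumerGroupName is omitted, the job reads from the Event Hub's default consumer group, which can conflict with other readers on the same stream.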
