Today's announcements from Google have certainly given us a lot to look at in terms of new hardware and features – and possibly a case of sticker shock. But while the show was mostly dominated by new gadgets and demos of Google Assistant, there was a genuinely important addition for developers (and ultimately users) at the tail end of the event: Google intends to turn Assistant into a major ecosystem for apps and services by opening the platform to developers.

The platform is called "Actions on Google," and it will allow developers to deliver custom experiences through Google Assistant. Assistant can already take advantage of existing capabilities like app indexing, deep linking, and even the Voice Interaction API to provide helpful answers and services. However, these techniques put a wall between apps and the voice interactions that call them, allowing only a very limited form of interaction. By opening Assistant to developers directly, it will be possible to create much more streamlined experiences.

Developers will be able to create two types of actions: direct and conversation. Direct actions can be triggered by Google Assistant when a command is clear and doesn't require any follow-up. These are very similar to the voice actions Google already supports with a number of existing partners. For example, this could include a command to play a specific song from a music app or turn on the light in front of your house.
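Google hasn't published the developer-facing API yet, so any code here is speculative, but a direct action presumably boils down to a single parsed intent handled in one shot. Below is a minimal sketch, assuming a webhook model where Assistant posts a recognized intent and expects a single confirmation back with no follow-up; the request shape, field names, and endpoint are all hypothetical.

```typescript
// Hypothetical sketch only: Google hasn't published the Actions API yet.
// Assumes a webhook model where Assistant POSTs a parsed intent and
// expects one confirmation back, with no follow-up questions.
import * as http from "http";

interface DirectActionRequest {
  intent: string;                     // e.g. "play_song" or "lights_on"
  parameters: Record<string, string>; // e.g. { song: "...", artist: "..." }
}

const server = http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const action: DirectActionRequest = JSON.parse(body);

    // A direct action completes in one shot: do the work, confirm, done.
    let speech: string;
    switch (action.intent) {
      case "play_song":
        speech = `Playing ${action.parameters.song}.`;
        break;
      case "lights_on":
        speech = "Turning on the front light.";
        break;
      default:
        speech = "Sorry, I can't do that yet.";
    }

    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ speech, expectUserResponse: false }));
  });
});

server.listen(8080);
```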

Conversation actions are used for requests that require a more continuous back-and-forth with users. These can turn into a dialog where Assistant asks the questions necessary to fulfill a request, and based on each response, additional questions may follow. Examples could include asking for a car to pick you up, then answering questions about where it should stop, how large the vehicle should be, and more.
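Again, the actual API hasn't been shown, but the flow described on stage maps naturally onto a slot-filling state machine: the action keeps asking for whatever information is still missing, then confirms once it has everything. A purely illustrative sketch, with all names hypothetical:

```typescript
// Hypothetical sketch only: the conversation API isn't public yet.
// Models a multi-turn ride request as a tiny state machine, with Assistant
// relaying each prompt to the user and each answer back to the action.
interface DialogState {
  pickup?: string;
  vehicleSize?: string;
}

// Returns the next question, or a final confirmation once all slots are filled.
function nextTurn(state: DialogState): { prompt: string; done: boolean } {
  if (!state.pickup) {
    return { prompt: "Where should the car pick you up?", done: false };
  }
  if (!state.vehicleSize) {
    return { prompt: "What size vehicle do you need?", done: false };
  }
  return {
    prompt: `Booking a ${state.vehicleSize} car to ${state.pickup}.`,
    done: true,
  };
}

// Simulated dialog: each user answer fills a slot, then the action re-prompts.
const state: DialogState = {};
console.log(nextTurn(state).prompt); // "Where should the car pick you up?"
state.pickup = "the office";
console.log(nextTurn(state).prompt); // "What size vehicle do you need?"
state.vehicleSize = "large";
console.log(nextTurn(state).prompt); // "Booking a large car to the office."
```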

Google wants these actions to be accessible without necessarily installing an app or authorizing a service in advance. As described on stage, Google Assistant will detect what is needed based on the context of a request and attempt to provide it. There are no details yet on how this will be achieved, but it sounds like developers will register their actions with some kind of discovery service.

Google recently acquired API.AI, a service that helps developers create conversational user interfaces. These can be turned into actions for Google Assistant, and similar tools and services will be supported in the future.
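To give a feel for what an API.AI-style conversational interface does, here is a rough sketch of sending an utterance to an NLU service and getting back a matched intent with extracted parameters. The endpoint, version parameter, token, and response shape shown are assumptions for illustration, not a documented contract:

```typescript
// Illustrative sketch of querying an API.AI-style NLU service over HTTP.
// The endpoint, token, and response shape below are assumptions for
// demonstration purposes, not a documented contract.
interface NluResponse {
  result: {
    action: string;                     // matched intent, e.g. "book_ride"
    parameters: Record<string, string>; // extracted slots
    fulfillment: { speech: string };    // text for Assistant to speak
  };
}

async function parseUtterance(utterance: string): Promise<NluResponse> {
  const response = await fetch("https://api.api.ai/v1/query?v=20150910", {
    method: "POST",
    headers: {
      Authorization: "Bearer YOUR_CLIENT_ACCESS_TOKEN", // placeholder
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query: utterance,
      lang: "en",
      sessionId: "demo-session-1",
    }),
  });
  return (await response.json()) as NluResponse;
}

// Example: map a spoken request onto an intent plus parameters.
parseUtterance("Get me a large car to the office").then((r) =>
  console.log(r.result.action, r.result.parameters),
);
```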

According to the presentation, the Actions on Google platform will be open, allowing anyone to build for it. It is expected to launch sometime in early December, though there are no details beyond that. Visit developers.google.com/actions to sign up for news and updates.

In addition to opening the platform to developers, Google is also working to bring Assistant to more hardware. The Embedded Google Assistant SDK was announced with the aim of encouraging hardware makers to include Google Assistant in anything from major product lines all the way down to personal projects built on a Raspberry Pi. In other words, we can expect to see Assistant running on plenty of hardware beyond just the Pixel phones, Google Home, and a short list of other partner devices – though the terms might be restrictive. The embedded SDK is expected to launch next year, but that's about the extent of the details.

This will be Google's first fully open hook into its interactive voice capabilities. Amazon already allows developers to integrate with Alexa's conversational flow, though in a fairly limited way, and Apple released SiriKit earlier this year to give iOS developers a way to integrate with its own voice assistant.