
Releases: jovotech/jovo-framework

v2.0

09 Jan 08:39
d24347a

🎉 Jovo Framework v2 is now available 🎉

Learn more in the announcement here: https://medium.com/@einkoenig/introducing-jovo-framework-v2-c98326ac4aca

v1.4

03 Aug 08:53

🎉 Jovo Framework v1.4 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

This release comes with new features for the Jovo Framework, the Jovo CLI, and the Jovo Debugger. To make use of all of them, update to the current version of the Jovo CLI with $ npm install -g jovo-cli and the Jovo Framework with $ npm install jovo-framework --save.

Learn more in the announcement here: https://www.jovo.tech/blog/jovo-framework-v1-4/

Release Notes

  • #204 Added Alexa Gadgets API - @omenocal
  • Added Alexa 'Request Customer Contact Information' Card+API call
  • Added language model tester
  • #205 Fixed setFullscreenImage in BodyTemplate6 - @vitaliidanchul
  • Added setAccessToken to GoogleAction request classes
  • #190 Added CanFulfillIntentRequest - @omenocal

v1.3

22 Jun 13:22

🎉 Jovo Framework v1.3 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

This release comes with a big new update: The Jovo Debugger (beta) is a plugin that allows you to visually debug your voice app in the browser, even while talking to the device. Update to the current version of the Jovo Framework with $ npm install jovo-framework --save, use the $ jovo run command, and open your webhook link in the browser!

Screenshot of the Jovo Debugger

Release Notes

v1.2

17 May 16:35
7570ab6

🎉 Jovo Framework v1.2 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

Read the full announcement here: Jovo Framework v1.2: In-Skill Purchases, Staging, Plugins, and more.

Release Notes

  • #153: Added Voice App Unit Testing (Beta) - @Milksnatcher.
  • #150: Added user context functionality - @KaanKC.
  • #144: Added In-Skill Purchases for Alexa Skills - @aswetlow.
  • #151: Added support to SpeechBuilder - @fgnass.
  • #136: Added app.json config support - @aswetlow.
  • Fixed getPermissionToken() bug (issue #152)
  • Fixed nodemon dependency bug (issue #112)
  • Fixed encoding in Google Action Dialogflow V2 requests

v1.1

29 Mar 20:15
43a83a4

🎉 Jovo Framework v1.1 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

Release Notes

  • #96: Added Chatbase integration - @KaanKC.
  • #104: Added Azure Functions support - @trevorwang.
  • #106: Added aliases for Speechbuilder's 'say-as' methods - @FlorianHollandt.
  • #99: Added gcloud datastore integration - @mehl.
  • 502c141: Added MediaResponses (AudioPlayer) and CarouselBrowse to Google Actions
  • 4d6308b: Added emits to public methods
  • f2fb9c2: Added plugin functionality
  • Fixed error in handleElementSelectRequest function (see issue 91)
  • Added Skill/List events for Alexa Skills (see issue 23)
  • Added more unit tests
  • Added this.reprompt SpeechBuilder instance
  • Fixed Dialogflow session context bug
  • cc5a888: Added app.onRequest() and app.onResponse() listeners (see issue #85)
  • 2ab4fdf: Fixed XML escaping in i18n
  • 375b3dc: Added support for Dialogflow API v2

v1.0

15 Feb 20:40
dfd1387

🎉 Jovo Framework v1.0 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

Take a look at the announcement here and read our migration guide.

Release Notes

  • New Project Structure
  • Jovo Language Model
  • Powerful New CLI Features
  • Jovo Webhook
  • A New Way to Access Input

New Project Structure

Jovo Folder Structure

A Jovo voice app is divided into three main building blocks: index.js (server configuration), /app (app configuration and logic), and /models (Jovo Language Model).
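For orientation, a freshly created project roughly looks like this (a sketch; the file names inside the folders can vary by template):

index.js        // server configuration (webhook or AWS Lambda entry)
/app
    app.js      // app configuration and logic
/models
    en-US.json  // Jovo Language Model, one file per locale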

Jovo Language Model

The Jovo Language Model offers a new way for you to maintain a local, consolidated language model that can be exported to both Alexa and Dialogflow with the new Jovo CLI features.
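For illustration, a locale file such as models/en-US.json could look like this (a sketch; the invocation name and intent are placeholders):

{
    "invocation": "my test app",
    "intents": [
        {
            "name": "HelloWorldIntent",
            "phrases": [
                "hello",
                "say hello"
            ]
        }
    ]
}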

Powerful New CLI Features

Create and Deploy a New Project

The new Jovo CLI (see docs here) offers tools that let you create platform-specific language model files and deploy them to the voice platforms Amazon Alexa and Dialogflow/Google Assistant.
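A typical workflow could look like this (a sketch of the command sequence; see the CLI docs for the full reference):

# create a new project from a template
$ jovo new HelloWorld

# generate the platform-specific language model files
$ jovo build

# deploy them to the Alexa and Dialogflow consoles
$ jovo deploy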

Jovo Webhook

The Jovo Webhook is a new, free service that allows you to easily create an endpoint to your local development server for faster testing and debugging.

A New Way to Access Input

Learn more here.

Each input is now an object which looks like this:

{
  name: 'inputName',
  value: 'inputValue',
  key: 'mappedInputValue', // may differ from value if synonyms are used in language model
}

For example, if we want to access the value of an input called name that the user provided, we can do so with name.value.

Other parameters (like id or platform-specific elements) can be found in the object as well.
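As a minimal sketch, assuming an intent with a single input called name (in v1, inputs are passed to the intent handler as parameters):

app.setHandler({
    'MyNameIsIntent': function(name) {
        // name.value holds what the user said, name.key the mapped value
        this.tell('Hey ' + name.value + '!');
    },
});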

v0.6

20 Sep 10:01

🎉 Jovo Framework v0.6 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

Release Notes

  • New Google Assistant Features
    • Suggestion Chips
    • List
    • Carousel
    • BasicCard
  • Easier App Configuration
  • Multi Platform Handlers
  • Refactoring and Bugfixes

New Google Assistant Features

We included more features for the Google Assistant mobile app, so you can build voice apps that show additional visual content when your users are inside the app.

You can take a look at everything in this example file.

BasicCard

The Google Assistant Basic Card offers the ability to display not only a title, subtitle, image, and body content, but also a button with a link to guide users to more information. You can find the official reference by Google here: Assistant Responses > Basic Card.

Access the Google Action Basic Card class like this:

const BasicCard = require('jovo-framework').GoogleAction.BasicCard;

In your app logic, you can use it like in this example:

'BasicCardIntent': function() {
    let basicCard = new BasicCard()
        .setTitle('Title')
        // Image is required if there is no formatted text
        .setImage('https://via.placeholder.com/720x480',
            'accessibilityText')
        // Formatted text is required if there is no image
        .setFormattedText('Formatted Text')
        .addButton('Learn more', 'https://www.jovo.tech');

    app.googleAction().showBasicCard(basicCard);
    app.tell('This card is so basic.');
}

Suggestion Chips

Suggestion chips are small buttons that show up on screen surfaces with Google Assistant to help guide users through the conversation flow. You can find the official reference by Google here: Assistant Responses > Suggestion Chip.

Suggestion Chips can be added to an output with an ask call. The session needs to stay open, as the tap on a suggestion chip is treated the same as speech input after a prompt.

You can add suggestion chips like this:

app.googleAction().showSuggestionChips(['Yes', 'No']);
app.ask('Please answer with yes or no', 'Yes or no please.');

Please make sure that the text on your suggestion chips is part of the sample phrases of the intents you want to route your users to.

List

Lists offer a way to display information vertically in an organized way. You can find the official reference by Google here: Assistant Responses > List Selector.

You need to use both the List and the OptionItem class:

const List = require('jovo-framework').GoogleAction.List;
const OptionItem = require('jovo-framework').GoogleAction.OptionItem;

After creating a new list, you can add items to it with the addItem method. Here is an example:

'ListIntent': function() {
    let list = new List();

    list.setTitle('Simple selectable List');

    list.addItem(
        (new OptionItem())
            .setTitle('Item 1')
            .setDescription('This is the first item')
            .setImage('https://via.placeholder.com/720x480', 
                'accessibilityText')
            .setKey('Listitem1key')
    );
    list.addItem(
        (new OptionItem())
            .setTitle('Item 2')
            .setDescription('This is the second item')
            .setKey('Listitem2key')
    );
    app.googleAction().showList(list);
    app.ask('Choose from list', 'Choose from list');
},

This list is now displayed in the Google Assistant mobile app.

Use ON_ELEMENT_SELECTED in your handlers to make use of the list elements once they're clicked:

'ON_ELEMENT_SELECTED': function() {
    let selectedElement = app.getSelectedElementId();
    app.tell('You selected ' + selectedElement);
},

Carousel

The carousel in Google Assistant offers a list of elements that users can scroll through horizontally. Learn more in the official Google reference: Assistant Responses > Carousel Selector.

You need to use both the Google Action Carousel and the OptionItem class:

const Carousel = require('jovo-framework').GoogleAction.Carousel;
const OptionItem = require('jovo-framework').GoogleAction.OptionItem;

Similar to the list, OptionItem elements are added to the carousel with the addItem method. Here is an example:

'CarouselIntent': function() {
    let carousel = new Carousel();

    carousel.addItem(
        (new OptionItem())
            .setTitle('Item 1')
            .setDescription('This is the first item')
            .setImage('https://via.placeholder.com/720x480',  'accessibilityText')
            .setKey('Carouselitem1key')
    );
    carousel.addItem(
        (new OptionItem())
            .setTitle('Item 2')
            .setDescription('This is the second item')
            .setImage('https://via.placeholder.com/720x480', 'accessibilityText')
            .setKey('Carouselitem2key')
    );
    app.googleAction().showCarousel(carousel);

    app.ask('Choose from carousel', 'Choose from carousel');
}

Use ON_ELEMENT_SELECTED in your handlers to make use of the carousel elements once they're clicked:

'ON_ELEMENT_SELECTED': function() {
    let selectedElement = app.getSelectedElementId();
    app.tell('You selected ' + selectedElement);
},

Easier App Configuration

You can now use the setConfig method to add all configurations in one place. For example, for logging or intent maps, you no longer have to do this:

app.enableRequestLogging();
app.enableResponseLogging();
let intentMap = {
    'AMAZON.HelpIntent': 'HelpIntent'
};
app.setIntentMap(intentMap);

With the setConfig method, it would look like this:

app.setConfig({
    requestLogging: true,
    responseLogging: true,
    intentMap: {
        'AMAZON.HelpIntent': 'HelpIntent'
    }
});

You can find all the potential configurations (and their default values) in this example file.

Multi Platform Handlers

Ever wanted some intents to differ between platforms, without working your way through endless if-else statements? You can now define handlers for different platforms, like this:

// listen for post requests
webhook.post('/webhook', function(req, res) {
    app.handleRequest(req, res, handlers);
    app.setAlexaHandler(alexaHandlers);
    app.setGoogleActionHandler(googleActionHandlers);
    app.execute();
});

The handlers themselves could then look like this:

const handlers = {
    'LAUNCH': function() {
        app.toIntent('HelloWorldIntent');
    },
};

const alexaHandlers = {

    'HelloWorldIntent': function() {
        app.tell('Hello Alexa User');
    },
};

const googleActionHandlers = {

    'HelloWorldIntent': function() {
        app.tell('Hello Google User');
    },
};

Note that you only need to define the intents in your platform-specific handlers that you want to override.

Take a look at this example file.

v0.5

07 Sep 10:17

🎉 Jovo Framework v0.5 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

Release Notes

  • Create contextual experiences with the User object
  • Refactoring
  • Bugfixes

User Object

We just made a big update to the User Class to make a few things easier while developing your voice apps:

  • Database integration
  • Metadata
  • More to come

Database Integration

The Jovo Framework now comes with easier database integration, directly tied to the User object. Previously, you had to do something like this to store user-specific data:

let score = 300;
app.db().save('score', score, function(err) {
    // do something
});

Now, you can directly access and save the data in the user object. Our database integration will do the rest in the backend:

app.user().data.score = 300;
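Reading the value back works the same way; the user data is loaded at the beginning of a request and persisted at the end:

// on a later request by the same user
let score = app.user().data.score;
app.tell('Your score is ' + score);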

If you're already using the database class for storing data, no worries, it will still work. However, we encourage you to switch to the user object functionality, as it comes with a clearer structure and additional features, like metadata:

Metadata

The user object metadata is the first step towards building more contextual experiences with the Jovo Framework. Right now, the following data is automatically stored (by default in the FilePersistence db.json file, or in DynamoDB if you enable it):

  • createdAt: When the user first used your app
  • lastUsedAt: When your user last interacted with your app
  • sessionsCount: How many sessions your user has had with your app

Here's some example code:

let userCreatedAt = app.user().metaData.createdAt;
let userLastUsedAt = app.user().metaData.lastUsedAt;
let userSessionsCount = app.user().metaData.sessionsCount;

More to come

As mentioned above, this is just a first step towards more contextual experiences that adapt to your users' behavior, the number of devices connected to your app, and more.

Suggestions for more interesting things to use the User Object with? Please let us know at [email protected]!

v0.4

31 Aug 10:22

🎉 Jovo Framework v0.4 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

Release Notes

  • Echo Show Render Templates
  • Request Verifier for Webhooks
  • New Features to Access User Data
  • Improved Speech Builder

Echo Show Render Templates

Jovo now supports more visual elements for the Echo Show. With the Alexa-specific Render Templates, you can display the information in a more beautiful and organized way.

The templates can be accessed through our TemplateBuilder, like so:

let bodyTemplate1 = app.alexaSkill().templateBuilder('BodyTemplate1');

bodyTemplate1
    .setTitle('BodyTemplate1 Title')
    .setTextContent('Foo', 'Bar');

app.alexaSkill()
    .showDisplayTemplate(bodyTemplate1)
    .showDisplayHint('Foo Bar'); // optional

app.tell('Look at your Echo Show');

Each template requires its own information to be displayed correctly. We offer all Body and List Templates. You can find all of them here: Jovo Alexa renderTemplate folder on GitHub

Here is an example file to play with: indexRenderTemplate.js example file

For more information on Render Templates and styling, take a look at the official guidelines by Amazon: Alexa Display Interface Reference

Request Verifier for Webhooks

Until today, our webhook functionality worked perfectly fine for local prototyping. For their published Alexa Skills, developers usually chose to switch to AWS Lambda. However, if they wanted to use an HTTPS endpoint for a Skill in production, it was a little tedious, as you needed to verify that every request was coming from the Alexa API, and from nowhere else.

With today's release and the WebhookVerified class, verifying an Alexa request is as easy as changing one line of code:

If you use this for local prototyping:

const webhook = require('jovo-framework').Webhook;

You can just switch to this when deploying your code to a web server for production:

const webhook = require('jovo-framework').WebhookVerified;
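In context, only the require line changes; the rest of the server setup stays the same (a sketch based on the standard webhook boilerplate, with port and handler names as placeholders):

// verifies that incoming requests actually come from the Alexa API
const webhook = require('jovo-framework').WebhookVerified;

webhook.listen(3000, function() {
    console.log('Webhook is listening on port 3000.');
});

webhook.post('/webhook', function(req, res) {
    app.handleRequest(req, res, handlers);
    app.execute();
});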

Thanks @mreinstein for the alexa-verifier-middleware that we're using for this.

New Features to access user data

Jovo now offers more features to conveniently access (and edit) user information (see the example after this list):

  • Alexa: Device Address (see example code)
  • Alexa: Lists (see example code)
  • Account Linking Card: Thanks to @omenocal, you can now use account linking cards for both Amazon Alexa and Google Actions
  • getTimestamp(): Access the time of the user request
  • isNewSession(): Returns true if the request is the first in this session
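For illustration, the two request helpers could be used together like this (a minimal sketch):

if (app.isNewSession()) {
    // getTimestamp() returns the time of the current user request
    app.tell('Welcome! You reached us at ' + app.getTimestamp());
}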

Improved Speech Builder

We're constantly improving the Jovo SpeechBuilder to help you with more powerful features for output creation. This time, we added the following methods (see the sketch after this list):

  • addSentence
  • addSayAsCardinal
  • addSayAsOrdinal
  • addSayAsCharacters
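Here is a quick sketch of how the new methods can be chained (the spoken output in the comments is approximate):

let speech = app.speechBuilder()
    .addSentence('You finished level three')
    .addSayAsOrdinal(3) // "third"
    .addSayAsCardinal(3) // "three"
    .addSayAsCharacters('jovo'); // "j o v o"

app.tell(speech);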

What we're also excited about: You can now add a probability parameter of type float for more variability in your responses. For example, a probability of 0.3 would add the element to your response in 3 out of 10 cases:

let speech = app.speechBuilder()
    .addText('This will be said all the time.')
    .addText('This only in some cases', 0.3);

Contributors

Thank you 💯

  • @omenocal, for adding platform specific features (#18) 👏

v0.3

22 Aug 11:10

🎉 Jovo Framework v0.3 is now available on npm: https://www.npmjs.com/package/jovo-framework 🎉

Release Notes

  • New: Alexa Audio Player Skills
  • New: i18n for Multi-Language Voice Apps
  • Improved Logging
  • Improved SpeechBuilder Features for Variability
  • Improved DynamoDB Integration
  • Bugfixes for States, Session Attributes, and more

Contributors

Thank you 💯

  • @omenocal, for helping with Session Attributes (#9), i18n support for multi-language voice applications (#10), small and large images for Alexa cards (#11), and DynamoDB improvements (#12). Wow! 👏
  • @MerryOscar, for pointing us to a bug with session attributes for Google Actions