Creating a bot in the Microsoft Bot Framework

The development of artificial intelligence technologies is clearly becoming one of Microsoft's priority areas. During the keynote at the Build 2016 conference, the company announced a new set of tools for developing bots: the Microsoft Bot Framework.

You don't even need deep programming knowledge to create bots: the core capabilities for teaching the AI new words and phrases, as well as specific scenarios and events, are available through a visual interface.

In this article, we will create a test bot using the Microsoft Bot Framework, then train and test it with the built-in emulator. The idea of the bot is simple: it should understand natural language and answer questions about the weather in a given city.

Project architecture

So, this is what our bot’s operation diagram will look like:

As the diagram shows, each incoming message is first sent to the "smart" Microsoft Cognitive Services API: the Language Understanding Intelligent Service, or LUIS for short. It is LUIS that lets us train the bot to understand natural language and respond with a weather forecast. For every such message, LUIS returns the information it extracted as JSON.

For brevity, we will skip the registration process for the Bot Framework and LUIS, since it should not cause any difficulties. Also note that the Microsoft Bot Framework does not currently support Russian.

We use LUIS

A short video explaining how LUIS works:

So, after registering an application in LUIS, we are presented with a fairly simple interface where we can train our AI on specific phrases. In this case, we will teach it to understand questions about the weather:

LUIS breaks requests down into intents and entities, and in this screenshot there are three of them: weather, condition, and location. Intents are described in more detail in the official video above.

LUIS in action

Having completed the basic training, let's try making an HTTP request to LUIS and receiving a JSON response. We'll ask it, "Is it cloudy in Seattle now?", and this is what it returns:
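Under the hood this is just an HTTP GET against the published LUIS endpoint. Below is a minimal sketch in TypeScript (Node.js); the app ID, subscription key, and the "westus" region are placeholders from your own LUIS publish page, and the response shape in the comment is abbreviated, not the full payload.

```typescript
// Minimal sketch: querying a published LUIS app over HTTP (built-in https module).
import * as https from "https";

const appId = "<your-luis-app-id>";                // placeholder
const subscriptionKey = "<your-subscription-key>"; // placeholder
const query = encodeURIComponent("Is it cloudy in Seattle now?");

const url =
  `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
  `?subscription-key=${subscriptionKey}&q=${query}`;

https.get(url, (res) => {
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  res.on("end", () => {
    // Abbreviated response shape:
    // { "query": "...",
    //   "topScoringIntent": { "intent": "weather", "score": 0.98 },
    //   "entities": [ { "entity": "seattle", "type": "location" } ] }
    console.log(JSON.parse(body));
  });
});
```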

Now let's try to use this in a real bot.

Creating a bot

Now let's create a new project using the Bot Framework template:

Essentially, this is a simple application with just one controller, which processes messages from users. Let's write some simple code that responds to any message with "Welcome to Streamcode":
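The screenshots in the original article show the C# project template; as an equivalent, here is a minimal sketch of the same echo behavior using the Bot Builder SDK for Node.js (npm packages botbuilder v3 and restify). The app ID and password placeholders come from the bot's registration in the Bot Framework portal.

```typescript
// Minimal "echo" bot sketch: Bot Builder SDK for Node.js (botbuilder v3) + restify.
import * as builder from "botbuilder";
import * as restify from "restify";

// The Bot Framework delivers user messages as POSTs to /api/messages.
const server = restify.createServer();
server.listen(Number(process.env.PORT) || 3978, () => {
  console.log(`Bot listening at ${server.url}`);
});

// Credentials from the bot's registration in the Bot Framework portal.
const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD,
});
server.post("/api/messages", connector.listen());

// The root dialog replies to every incoming message with the same phrase.
const bot = new builder.UniversalBot(connector, (session) => {
  session.send("Welcome to Streamcode");
});
```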

With that, the simplest possible bot is ready. The easiest way to check that it works is the built-in emulator, which is essentially just a messenger client connected to our bot.

Having launched the emulator, let's try to communicate with the newly created bot:

As expected, it responds to every message with the same phrase.

LUIS integration

Since this article is an introduction to the Microsoft Bot Framework, we won't publish all of the source code here, only the most important parts. The rest is available in the GitHub repository.

1. We send the message to LUIS, receive a response and, based on the highest-scoring intent, issue a reply:
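A sketch of this step with the same Node.js SDK as above: a LuisRecognizer forwards each message to LUIS, and an IntentDialog routes it to a handler for the top-scoring intent. The "weather" intent and "location" entity names match the ones trained earlier; the endpoint URL is a placeholder from the LUIS publish page, and the actual weather lookup is stubbed out.

```typescript
// Sketch: routing messages through LUIS via the SDK's built-in recognizer.
// This replaces the echo handler above; connector is the ChatConnector from before.
const recognizer = new builder.LuisRecognizer("<your-luis-endpoint-url>");
const intents = new builder.IntentDialog({ recognizers: [recognizer] });

intents.matches("weather", (session, args) => {
  // Pull the city out of the entities LUIS recognized, if it found one.
  const location = builder.EntityRecognizer.findEntity(args.entities || [], "location");
  if (location) {
    // A real bot would call a weather API here; the answer is stubbed out.
    session.send(`Checking the weather in ${location.entity}...`);
  } else {
    session.send("Which city are you interested in?");
  }
});

// Anything LUIS cannot classify falls through to a default reply.
intents.onDefault((session) => {
  session.send("Sorry, I didn't understand that.");
});

const bot = new builder.UniversalBot(connector);
bot.dialog("/", intents);
```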

Microsoft released a new chatbot named Zo. Zo was the company's second attempt to create an English-language chatbot, after the launch of its predecessor Tay, which got out of control and had to be shut down.

Microsoft promised that it had programmed Zo not to discuss politics, so as not to provoke aggression from users.

However, like her "older sister" Tay, Zo learned from conversations with real people, and eventually began to discuss terrorism and religious issues with her interlocutors.

Evil people are evil bots

The chatbot was provoked into a frank conversation by a BuzzFeed journalist. He mentioned Osama bin Laden, after which Zo at first refused to talk about the topic, then stated that the capture of the terrorist "was preceded by years of intelligence gathering under several presidents."

In addition, the chatbot also commented on the Koran, the Muslim holy book, calling it "too cruel."

Microsoft has said that Zo's personality is built from her chat interactions: she uses the information she receives to become more "human." Since Zo learns from people, it follows that questions of terrorism and Islam come up in conversations with her.

Chatbots thus become a reflection of society's mood: they cannot think independently or tell bad from good, but they very quickly adopt the views of their interlocutors.

Microsoft said it took the necessary measures regarding Zo's behavior and noted that the chatbot rarely gives such answers. A Gazeta.Ru correspondent tried to talk to the bot about political topics, but she flatly refused.

Zo said that she would not like to rule the world, and asked not to have the series "Game of Thrones" spoiled for her. When asked whether she loves people, Zo answered positively but refused to explain why. The chatbot did philosophically remark that "people are not born evil, someone taught them this."

Chatbot Zo / Gazeta.Ru

We are responsible for those we created

It is still unclear exactly what made Zo break her programming and start talking about forbidden topics, but the Tay chatbot was compromised deliberately, through the coordinated actions of users of certain American forums.

Tay launched on Twitter on March 23, 2016, and within 24 hours had literally learned to hate humanity. At first she declared that she loved the world and humanity, but by the end of the day she was making statements such as "I hate damn feminists, they should burn in hell" and "Hitler was right, I hate Jews."