Facebook, as a company, has such a bad public image that employees reportedly face criticism from family, relatives, and friends. That in itself says a lot about how the company is handling the numerous issues it faces these days, and what has been a near-continuous stream of scandals over the last year. The company apparently knows this, and in an attempt to help its employees deflect criticism from relatives and friends over the holidays, it has reportedly built an AI-powered chatbot.

As reported by The New York Times, the company has been testing the chatbot, called "Liam Bot," since spring of this year, and rolled it out to employees shortly before Thanksgiving. The chatbot is aimed at helping employees handle tough questions from family and friends about the company, its policies, and the criticism that has come its way in the wake of the Cambridge Analytica scandal and the many scandals since.

The New York Times also reports that the chatbot's answers are written by the company's PR team and, for the most part, align with the company's public statements on difficult topics such as hate speech and election meddling. As an example, the NYT mentions that asking the chatbot about hate speech gets responses like "It [Facebook] has hired more moderators to police its content" or "Regulation is important for addressing the issue."

According to a statement a Facebook spokesperson gave The New York Times: "Our employees regularly ask for information to use with friends and family on topics that have been in the news, especially around the holidays. […] We put this into a chatbot, which we began testing this spring."