
Airbnb's algorithm identifies troublesome guests. How's it going?

One of the biggest problems in the short-term rental business is that when owners hand their property over to strangers, they can be in for some bad surprises. You might wonder whether there is a magic formula for identifying troublesome guests. Well, Airbnb has one!

The algorithm for identifying troublesome guests is a result of Airbnb's acquisition of Trooly in 2017. The software performs a predictive analysis of each booking and calculates a risk score; a profile of each user is derived from information about their online activity collected from websites and social networks.

The short-term rental business is now a mature one, but problems arose during its growth phase. We often hear hosts complaining about the state in which 'guests' leave their rooms or houses, and there have been acts of sheer vandalism: a bathroom destroyed, a smart TV gone missing, a bedroom left full of rubbish after a weekend. This is not to mention the host who had the bad luck of running into professional house squatters. A series of issues like these is a real turnoff to working in the short-term rental business.

To support and guide its hosts, Airbnb turned to technology. Airbnb doesn't just want to be a platform for cheaper short-term rentals; it wants to be a safe platform where hosts can confidently rent their accommodation to conscientious people.
The "win-win" philosophy works only if both parties behave like adults.


How does the Airbnb AI select guests?


A traditional hotel usually has only three pieces of information about a potential guest: an ID card, a credit card and a phone number. The algorithm for identifying troublesome guests evaluates a whole additional basket of data on top of such basics: location data, social media connections, employment history, educational background, IP addresses and device identifiers.

The AI combines and processes this information and then decides whether or not to ban a potential user. In other words, even users who are already registered may receive a message advising them that their account has been removed. And that's it! From that moment on, it is no longer possible for these users to book anything with Airbnb.
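Airbnb has never published how the Trooly-derived model works internally, but the process described above, heterogeneous signals aggregated into a single risk score and accounts banned above some cut-off, can be sketched in a few lines of Python. Everything below (the feature names, the weights and the 0.7 threshold) is invented for illustration; this is a toy sketch of the technique, not Airbnb's actual model.

```python
# Illustrative sketch only: Airbnb has not disclosed its model.
# All feature names, weights and the ban threshold are invented.

from dataclasses import dataclass

@dataclass
class GuestProfile:
    verified_id: bool            # e.g. government ID on file
    social_accounts_linked: int  # social media connections found
    flagged_web_mentions: int    # web hits matching banned categories (hypothetical)
    prior_complaints: int        # host complaints on past bookings

# Hypothetical weights: each signal adds to the risk score.
WEIGHTS = {
    "unverified_id": 0.3,
    "no_social_footprint": 0.2,
    "flagged_mention": 0.25,
    "prior_complaint": 0.35,
}
BAN_THRESHOLD = 0.7  # invented cut-off

def risk_score(p: GuestProfile) -> float:
    """Combine the signals into a single score capped at 1.0."""
    score = 0.0
    if not p.verified_id:
        score += WEIGHTS["unverified_id"]
    if p.social_accounts_linked == 0:
        score += WEIGHTS["no_social_footprint"]
    score += WEIGHTS["flagged_mention"] * p.flagged_web_mentions
    score += WEIGHTS["prior_complaint"] * p.prior_complaints
    return min(score, 1.0)

def decide(p: GuestProfile) -> str:
    """Threshold decision: above the cut-off, the account is removed."""
    return "remove account" if risk_score(p) >= BAN_THRESHOLD else "allow booking"

if __name__ == "__main__":
    guest = GuestProfile(verified_id=True, social_accounts_linked=2,
                         flagged_web_mentions=1, prior_complaints=1)
    print(decide(guest), f"(score={risk_score(guest):.2f})")
```

On this toy profile, one flagged web mention plus one prior complaint gives a score of 0.60, below the invented threshold, so the booking is allowed. A real system would presumably learn such weights from data rather than setting them by hand, which is precisely what makes the resulting decisions hard to explain to banned users.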


What do ‘the excluded’ say?


Some excluded users complain that they have been unfairly banned. The Australian consumer-rights website Choice reported the following complaints about Airbnb.

Ms Renae Macheda, who describes herself and her husband as "clean, boring people", was recently banned from Airbnb. She never received any reason for the ban, only an email with the following text: "After reviewing all the information available to us, we've determined that your account will be removed from the Airbnb platform. Removal means that your account will no longer be accessible, and you won't be able to create another one. We want to assure you that we reviewed your case thoroughly before reaching this conclusion. As such, we won't be able to offer you additional support on this matter at this time."

Rick Andrews (not his real name) is an erotic masseur. From information collected on the web, Airbnb determined that Rick's job fell into the category of 'adult services' and sent him this email: "It turned out that your account was linked to activity that goes against our Terms of Service, specifically it was linked to online ads for adult services, which can include escort activity and commercial pornography."
Rick's is not an isolated case; many people who work as escorts and gigolos have been banned from Airbnb.


A difficult position for Airbnb


On the one hand, there is the hosts' obligation to operate any business in a safe and proper manner; on the other, there is the outcry of users being unfairly subjected to a virtual inquisition.
According to Kate Bower, an Australian consumer data advocate, the use of these algorithms amounts to a form of 'social scoring' reminiscent of a Black Mirror episode. The formula is widely used in credit reporting, insurance and financial services, and whilst it has the power to improve consumer services, it is also potentially harmful. In her view, the fact that automated profiling operates in an unregulated space is highly worrying.
For this reason, Bower, along with Thomas from the ARC Centre of Excellence for ADM+S, suggests that Australia adopt the line of the European Union (EU), which proposes an outright ban on 'social scoring' algorithms, along with the regulation of other uses of automated decision-making and artificial intelligence that pose a high risk to consumers.

