
First Published on 21st July, 2017
Ai Editorial: Airlines need to guard themselves against data server breaches, malware or phishing programs in order to protect a loyal traveller’s login credentials and account, writes Ai’s Ritesh Gupta
Fraudsters attacking loyalty programs is nothing new, but the threat is stronger than ever before.
The use of sophisticated bots is one reason why airlines and travel merchants need to be wary today. Bots are small applications that execute automated tasks, and once malware has infected a machine, that machine can be conscripted into a botnet – a serious concern. Bots are now at the forefront of online fraud at large, and they can be deployed to test login credentials and take over user accounts. Digital shoppers share their personal details expecting not to have to fill them in again when they transact or shop with their favourite loyalty currency, so all of this needs to be guarded. Botnets are being counted upon to step up the efficacy of malicious attacks – most commonly account takeover and distributed denial-of-service attacks.
Overall, travel merchants need to guard themselves against data server breaches, malware or phishing programs in order to protect a loyal traveller’s login credentials.

Bots and loyalty fraud
The situation is precarious as fraudsters and hackers are equipped to use artificial intelligence to access the sensitive data that bots use to serve customers, including for transactions. In such attacks, the personal information of members is obtained by nefarious means and a botnet is unleashed to complete illegitimate transactions – for instance, buying air tickets. Miles are accrued and the fraudster further capitalizes on the loyalty currency for more illegal transactions. Where redemption is concerned, the main focus is on digital gift cards, tickets and expensive merchandise that is easy to resell. Cybercriminals are adept at comprehending the configuration and structure of gift card numbers, and botnets are part of their plans to target gift cards; when a card is breached, they steal the stored value. As the team at Chargebacks911 points out, the actual peril of loyalty program fraud is that the damage is already done by the time airlines come to grips with the fact that there has been a breach. If the breach is spotted too late, airlines can't resell the tickets, and they also have to deal with the applicable chargeback fees. And what happens to loyalty currency in a compromised account that has already been redeemed? Too many complications result from such malicious attacks.
Vulnerable areas
Experts point out that the availability of compromised identity credentials on the dark web in large numbers is a major indicator that authentication mechanisms tend to be poor – there is no room for archaic authentication systems. One of the main weapons of cybercriminals and fraudsters is identifying vulnerabilities. So airlines need to identify the ways in which account information can be accessed, in all probability via a blend of phishing scams, identity theft and the cracking of feeble passwords. Such unauthorized account takeover results in misuse of the loyalty currency.
So what kind of attacks are these, and is the current fraud prevention setup enough to combat botnet attacks, a sign of which could be, for instance, abnormal traffic patterns? Fraudsters and cybercriminals engineer these attacks to look like authentic traffic. One of the major issues is coming to grips with low-volume, low-frequency attacks. Web application firewalls struggle because that layer was devised to avert attacks against web services, not against customer identities. Web application firewalls count on IP reputation services and IP address velocity filters to identify bots. This arrangement is futile considering that botnets rotate IP addresses and have access to previously leaked user credentials.
As for controlling this, merchants first need to detect any contextual aberration in the way users generally use their devices, or any signal of deviation based on other dynamic data points such as behaviour, location and networks; identify whether devices or connections have been corrupted with malware; and watch for unusual traffic patterns.
According to digital identity specialist ThreatMetrix, behavioural profiling and analytics constantly record all the actions pertaining to a device, account or persona. This paves the way for identification of low-volume, low-frequency attacks, even if they are distributed.
A rule set that checks for an IP address associated with numerous email accounts offers an indication of whether traffic is botnet-related or not.
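A rule of this kind can be sketched in a few lines. The event format and the threshold of five accounts per IP below are purely illustrative assumptions, not taken from any vendor's product:

```python
from collections import defaultdict

def flag_suspect_ips(events, max_accounts_per_ip=5):
    """Return IPs seen logging in to more than max_accounts_per_ip
    distinct email accounts. events is an iterable of (ip, email) pairs."""
    accounts_by_ip = defaultdict(set)
    for ip, email in events:
        accounts_by_ip[ip].add(email)
    return {ip for ip, accounts in accounts_by_ip.items()
            if len(accounts) > max_accounts_per_ip}
```

In practice such a check would run over a rolling time window and feed a broader risk score rather than block traffic on its own.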
Beyond web application firewalls and spotting deviations from usage patterns based on dynamic data points, travel e-commerce players also need to count on real-time shared intelligence accumulated across industries and markets, botnet proxy detection (the new generation of private botnet proxies does not appear on public proxy lists), and vigilance over application integrity and malware detection to monitor all devices connecting to their digital assets.
Be it processing transaction data or managing users' profiles and accounts, data security is a critical part of any loyalty program. It is imperative for airlines to shield their loyal members – right from account creation through account management and accrual of miles to redemption of points. All of this shouldn't hamper the experience at any touchpoint. The fraud prevention initiative – via behaviour analytics, device identification and a tightening of data and IT infrastructure – needs to protect loyal members even if a fraudster knows their password. If I can access my loyalty program account easily, can a fraudster be denied the chance to do so? Loyalty fraud security needs to evolve to match today's threats.
Hear from experts about loyalty fraud at the upcoming 2017 APAC Loyalty Fraud Prevention Workshop, to be held in Singapore on 23rd August this year. For more, click here
Or
Attend Ai’s 6th Airline & Travel Payments Summit Asia-Pacific, to be held in Bali (29 – 31 August). For more, click here
Follow Ai on Twitter: @Ai_Connects_Us

First published on 7th April, 2017
Ai Editorial: A meticulous approach needs to be taken to target every source of chargeback. The role of data, along with human expertise, needs to be optimized, writes Ai’s Ritesh Gupta
Fraud threats are constantly changing and expanding. As fraud detection technology evolves, criminals alter their tactics—what worked for them in the past might not work today.
When it comes to fraud and chargeback management, agility is one potent weapon.
Be it for counting on new technology or human expertise or ensuring earning potential isn’t being curbed, airlines, like any merchant, need to be spot on with their moves.
The consolidated figure for airline chargebacks is estimated at $1.5 billion annually. The financial consequences include dealing with fraudulent orders, the expenditure incurred on fighting fraud, and turning down valid orders. What is typically the chargeback rate for an airline or a travel e-commerce entity that processes millions of card-not-present transactions a year? If it is 0.8%, how can it be brought down to 0.7%? What is the timeline, and how would it improve the financial situation? In this way, travel e-commerce organizations not only keep the rate in check but also strive to improve against pragmatic goals.
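A back-of-the-envelope calculation shows what such a target is worth. The transaction volume and average order value below are hypothetical, chosen only to make the arithmetic concrete:

```python
# Illustrative only: what a 0.1-percentage-point cut in chargeback rate
# means for a hypothetical merchant processing 5 million card-not-present
# transactions a year at an average order value of $200.
transactions = 5_000_000
avg_order_value = 200.0

def chargeback_cost(rate):
    """Disputed volume (before fees) at a given chargeback rate."""
    return transactions * rate * avg_order_value

saving = chargeback_cost(0.008) - chargeback_cost(0.007)
# Moving from 0.8% to 0.7% avoids 5,000 chargebacks, i.e. roughly
# $1,000,000 of disputed volume a year, before per-dispute fees.
```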
Here we assess some of the initiatives that can help in prevention of chargebacks.
Addressing the real issues: Airlines need to rely on technology, machine learning and human forensics, or a blend of all three, to ensure they know the real source of each chargeback. Otherwise one can never get to the core of the problem, because a customized action, as part of a robust prevention strategy, is required to combat each chargeback source. Without that, airlines won't be able to target the right problem at an opportune time.
In this context, getting into the details of criminal fraud (how to cut down on unauthorized transactions that get processed?), friendly fraud (difficult to detect at the time of purchase, and issuers usually accept a customer's assertion) and merchant error (up to 40% of chargebacks may be caused by the merchant's own mistakes, oversights or shortcomings) is a must. Also, airlines can't consider only basic tools; at the same time, they can't feature every offering available for managing risk exposure. For instance, any merchant who uses the Address Verification Service along with card security codes or 3D Secure is technically using multiple solutions to prevent fraud. Other options include geo-location, device authentication, proxy piercing, biometrics and so on. So airlines need to work out a meticulously constructed fraud mitigation plan.
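As an illustration of how such layered signals might be combined into one decision, here is a minimal sketch. The field names, the response values and the two-signal rejection threshold are all assumptions; real AVS and CVV results arrive as processor-specific response codes:

```python
def screen_order(order):
    """Combine basic fraud signals into an accept/review/reject decision.
    Purely illustrative; thresholds and fields are hypothetical."""
    signals = []
    if order.get("avs_result") != "match":
        signals.append("avs_mismatch")
    if order.get("cvv_result") != "match":
        signals.append("cvv_mismatch")
    if order.get("ip_country") != order.get("card_country"):
        signals.append("geo_mismatch")

    if not signals:
        return "accept", signals
    if len(signals) >= 2:          # multiple layered signals agree
        return "reject", signals
    return "review", signals       # a single signal goes to manual review
```

The point of layering is visible in the last branch: no single check is decisive on its own, which keeps borderline genuine orders out of the reject pile.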

Experts point out that enforcing blacklists (featuring known fraudsters) after an attack, rather than preventing the unauthorized transactions in the first place, isn't ideal. Rather, look at non-technical and API integration options, and act faster. Look out for the real source of each chargeback, and sort out grey areas like uncertain merchant error.
Coming to grips with the problem in time: The industry today is improving the acceptance rate with an integrated system for pre-authorization fraud scoring/ screening and post-authorization chargeback mitigation/ fraud recovery.
The travel industry is also relying on the efficacy of a machine learning engine that evaluates fraudulent users in real-time. Data is analyzed instantly, linking seemingly unconnected signs left behind by fraudsters. Other than detecting fraud, data is also playing its part in ensuring “genuine” orders do not get declined owing to any uncertainty around the transaction.
Alerts have emerged as a viable, faster alternative to the chargeback process. The idea is to do away with the need for a chargeback altogether, and the key is to stop the processing of a chargeback in time. As shared by Ethoca during one of our conferences, upon notification from issuers, the company transmits an alert to the merchant. For their part, airlines can refund the passenger to avoid the chargeback, and the alert outcome is passed back to the issuer. The result: losses for which the merchant and issuer are liable are recovered by the card issuer on first contact. This also means companies can be in better control of things to come, preventing instances of fraud in the future, and they can use link analysis to eradicate related fraudulent orders.
Making the most of human expertise: Artificial intelligence can extract anomalies and identify patterns from real-time data, but human intelligence is still needed. According to Kount, it isn't just the quality of data, its accuracy or the number of datasets that matters; human capabilities are also needed to communicate, strategize and guide machines to the optimum business result.
Working in unison: There are multiple stakeholders at risk when it comes to chargebacks.
Fraudulently filed chargebacks affect each stakeholder in the payment chain.
· For merchants, a multi-layered approach is best. Today’s solution must be agile and diverse, coupling an evolving defence with effective representment strategies.
· Acquiring banks can help reduce the effects of fraud by establishing internal blacklists and developing chargeback triggers for advanced alert notifications.
· Processors who undergo the most stringent underwriting procedures to maximize their KYC (Know Your Customer) compliance will ultimately reap the benefits, by helping to ensure their merchants follow best-practice methods that work alongside operational efforts to prevent friendly fraud.
· For issuers, additional due diligence is key. Despite the temptation to rapidly resolve a cardholder dispute, additional effort will pay off in the long run for those who consciously work to prevent bad habits from forming in the first place.
Are you bold enough to survive in the brave new world? Assess your preparedness at 11th Airline & Travel Payments Summit (ATPS).
Date: 03 May 2017 - 05 May 2017
Location: Berlin, Germany
For information, click here
Follow Ai on Twitter: @Ai_Connects_Us

25th November, 2020
Ekata has identified five markers of success that could also help entities unlock the potential of PSD2 SCA, be it for a provider or a merchant.
In its latest study, the company asserted that the “security enhancements provisioned under PSD2, such as SCA, quickly become more than a legal protection checkbox – they are a matter of vital strategic importance to the bottom line of any organization”.

The study featured over 36 PSPs and acquirers who represent over 60% of European card-not-present (CNP) volume, and identified the markers that differentiate leaders from the rest of the pack.
By Ai Team

First Published on 24th March, 2017
Ai Editorial: Operationalizing digital identity assessments is one initiative that every e-commerce enterprise needs to manage diligently, writes Ai’s Ritesh Gupta
Airlines, like any merchant, need to safeguard the personal details their users have saved. Imagine a situation where flyers open a personal account on an airline's digital platform for speedy bookings and swift flight check-ins, and at some point that data gets stolen, breaking access to digital services and even resulting in negative publicity. This is indeed a dreadful situation.
E-commerce entities require data to serve personalised offerings, but if they become the victim of a data breach, then even a project like digital transformation suffers a major setback. No airline can fathom a breach of loyalty miles, with a hacker selling account credentials so the miles can be redeemed for tickets!
While e-commerce entities like Ryanair are looking at account personalization in a big way, this also means fraudsters can count on user identities to access personal and payment details – for instance, by using a trusted credit card saved in a valid customer account.
There is no scope for traditional ways of securing accounts or preventing fraud. Savvy digital entities, focused on enrolling customer details in new ways to personalise their offerings, now consider stored static information a potential breach target. The level and layering of security needs to be evaluated, as fraudsters can hijack legitimate login sessions. Do seek tighter measures against malware and social engineering attacks.
In fact, the threat of being breached can have a detrimental impact on a bunch of airlines in one go. How? Experts don't rule out multiple airlines' systems being breached at the same time: when a user's account on one airline's system is breached, hackers will use the same credentials to take over that user's accounts on other airlines' systems, since users seldom vary their login credentials.
Bigger threat with “connected” world
Today’s intricately connected world means airlines have to work on their IT infrastructure, data management, digital interfaces etc. to ensure there is consistency in interactions. But this digital first approach also calls for stringent protection.
For instance, the Internet of Things (IoT) assumes that information and data will flow seamlessly and securely from one device or party to another, where it can be accessed and used immediately. If the IoT keeps track of the items you intend to purchase, it can automatically tally and process the payment as soon as it connects to the nearest payment terminal or app and verifies the customer's information. But wouldn't this call for stronger protection?
Fraudsters can work out near perfect identities from the digital detritus that digital entities and consumers are providing. As ThreatMetrix aptly states: “It is identity, not passwords or payment details, that is the cybercrime currency of 2017: near perfect, yet terrifying, simulacrums of you and I that can be used to open new accounts, hack into existing ones, and monetize fraud attacks.”
According to ThreatMetrix’s Q4 Cybercrime Report, few of the alarming trends that need to be watched out for include:
· New account originations continue to be the riskiest transactions with nearly 1 in 10 rejected.
· Considering a spate of data breaches, organizations can’t rely on static data elements. Dynamic information featuring a user’s digital identity will be critical in distinguishing “good customers from bad”.
· Fresh assaults will target collection of more details to strengthen stolen identities, rather than immediate monetization.
Attack from several quarters
Airlines need to consider the fact that it is hard to distinguish between identities penetrated from behind a network or firewall and those obtained via an account compromise. It is a big blow, one that, propelled by the convincing identities formulated by fraudsters, can fuel large-scale attacks.
Organized crime
This stolen data is traded by organized crime networks via certain websites, apparently accessible through specialized encryption software and browser protocols that conceal the location of the cybercriminals who run and use such sites.
Recently, a cybercriminal was reportedly sentenced to 50 months in prison for identity theft. This fraudster was caught selling personal data of victims on a cybercrime platform, AlphaBay.
Definition of being safe
When we talk of digital-first for a seamless, personalised experience, the safety of identity and account data needs to be prioritized as well. Also, considering the lightning speed with which consumers expect every digital interaction to shape up, airlines need to validate customer identities without friction.
Bot detection, ID verification, device check, cookie erasing etc. are coming into use.
Specialists assert that it is critical to evaluate every digital identity, one shaped by dynamic, shared intelligence unearthed from a variety of sources rather than from the specific organization a user transacts with. It is time to blend static identity data with dynamic, real-time intelligence from current and historical transactions. To gain better results and minimize friction, specialists are counting on behavioural biometrics, analytics and predictive models based on past behaviour and transaction data to authenticate transactions. The plan is to relate user and device interactions in the present session to past interactions, and to look at the gamut of attributes associated with the user, device and connection.
Are you bold enough to survive in the brave new world? Assess your preparedness at 11th Airline & Travel Payments Summit (ATPS).
Date: 03 May 2017 - 05 May 2017
Location: Berlin, Germany
For information, click here
Follow Ai on Twitter: @Ai_Connects_Us
27th October, 2020
The significance of consumers feeling secure about their data, including personal information and all other critical aspects of an individual's association with a brand – for instance, their garnered loyalty currency – shouldn't be underestimated.
Almost 85% of consumers “are more loyal to companies that have strong security controls”, highlighted Bindu Gupta, Loyalty & Marketing Strategist, Comarch, Inc during the inaugural session of the LSA Fall Virtual Conference 2020 today. “Consumer loyalty to a brand is at a high risk. Brands cannot take loyalty for granted,” mentioned Bindu.

An attack on an entity’s data asset resulting in a breach or on a loyalty program is a big blow, more so at this juncture when teams are working remotely.
Bindu explained that loyalty is more than a rewards program. Trust is what brands must focus on, and once this is established, it eventually results in more transactions (customers are 92% more likely to buy additional products and services). The "experience" offered also tends to make a huge difference. She emphasized the human element of customer experience: three-fourths of customers tend to be interested in interacting with a human rather than an automated machine. "Human connections are needed even more now," said Bindu. She also mentioned that brands need to personalize the entire earn and rewards experience.
By Ritesh Gupta
Ai Team

4th June, 2019
Ai Editorial: 3DS 2.0 promises to combat fraudulent online transactions, but merchants need to cut down the possibility of losing payments authenticated using the new version of 3DS, writes Ai’s Ritesh Gupta
Transition to the new version of 3D Secure is being followed closely, owing to its impact on the shopping experience and in improving security of a transaction.
More so for high-risk transactions, or in a market like Europe, where PSD2 introduces strict security requirements for the initiation and processing of electronic payments that apply to all payment service providers. In Europe, organizations are expected to upgrade to the new version by September 2019, to be ready for the enforcement of Strong Customer Authentication (SCA). Since this directive mandates changes in how fraud review must be conducted on intra-EU transactions, critical issues such as cart abandonment need to be evaluated in detail. The SCA aspect of the PSD2 directive can have a negative impact on revenue generation, and this is what stakeholders are concerned about.
It is being highlighted that 3-D Secure 2.0 will pave the way for a real-time, protected data-sharing channel that merchants can use to send an unmatched number of transaction attributes, which the issuer can use without requiring a static password. One of the highlights of 3DS 2.0 is data sharing. This data exchange is relatively richer owing to the combination of certified SDKs in the checkout flow paired with data-sharing APIs. Authorization rates can be stepped up with no perceivable alteration to the checkout flow.

Depending on the sort of data provided by merchants and their respective payment service providers, the issuer can act in a couple of ways to decide the course of the payment. If the information provided is deemed sufficient to assess the authenticity of the buyer, the transaction is eligible for a frictionless flow, and authentication isn't interrupted from the shopper's perspective. If the transaction isn't in line with the normal purchasing pattern, it ends in what is called a challenge flow: a requirement crops up for a one-time password from the buyer to authenticate the payment. This is where the efficacy of the new version comes in, as the challenge flow is blended into the mobile checkout experience without redirects. Visa states that merchants can embed 3DS 2.0 into a web page or native application, and can customize the user interface elements (e.g., buttons, fonts, inputs) for any challenge method used. The mobile SDKs set up flows within apps, which means a shopper isn't required to finish the payment in a separate browser-based flow.
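The frictionless-versus-challenge routing described above can be sketched as follows. The attribute flags, weights and threshold are placeholders of my own; a real access control server scores the full set of 3DS 2 transaction attributes:

```python
# Hypothetical risk attributes and weights, for illustration only.
RISK_WEIGHTS = {
    "new_device": 0.30,
    "shipping_differs_from_billing": 0.25,
    "amount_above_usual": 0.35,
}

def route_authentication(txn, threshold=0.5):
    """Return 'frictionless' when the transaction looks in line with the
    buyer's normal pattern, otherwise 'challenge' (e.g. a one-time
    password is requested from the shopper)."""
    score = sum(w for flag, w in RISK_WEIGHTS.items() if txn.get(flag))
    return "challenge" if score >= threshold else "frictionless"
```

A routine repeat purchase scores low and passes straight through; a purchase from a new device for an unusual amount trips the threshold and is challenged.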
Assessing the impact
Merchants need to be alert to the fact that a refined 3DS 2 user experience alone won't pave the way for an optimal acceptance rate. Merchants need to be clear about which transactions require authentication and which don't.
Rodrigo Camacho, Chief Commercial Officer, Nethone, says merchants shouldn’t push 3DS for all transactions.
“At Nethone we have found that 3DS typically costs merchants anywhere between 2% and 3.5% in conversion rates in Europe and upwards of 15% in the Americas,” mentioned Camacho in a company’s blog post. “Typically we have seen that it’s only necessary to push 3DS to less than 8% of your traffic which will lower the impact on your conversion rates by more than 90%.”
According to another analysis, Ravelin’s data indicated that 3DS with improved user experience still lost 19% of payments.
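The arithmetic behind such selective routing is simple; the figures below merely restate the ballpark numbers quoted above and are illustrative, not a measurement:

```python
# If 3DS applied to all traffic costs ~3% of conversions (the middle of
# the 2%-3.5% range Nethone quotes for Europe), routing only the riskiest
# 8% of traffic through 3DS shrinks the blanket impact proportionally.
blanket_conversion_loss = 0.03   # conversion loss if everyone is challenged
routed_share = 0.08              # share of traffic actually sent to 3DS

effective_loss = blanket_conversion_loss * routed_share  # ~0.24% overall
impact_reduction = 1 - routed_share                      # 92% less exposure
```

This assumes the conversion loss scales with the share of traffic exposed to 3DS, which is the simplification behind the "more than 90%" claim.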
Being prepared
When customers are asked to verify transactions, they are presented with a challenge flow. The challenge method that's used is determined by the issuer.
Visa recommends three UX principles:
As explained by Visa, three verification methods are as follows:

Also, a customer's purchase can be verified on the existing issuer app by entering sign-in credentials. Visa also states that since many iOS and Android users already use fingerprint scanning to access their phones, it recommends using the same method to authenticate customers, and advises that any biometric authentication be used in addition to a passcode, so that if biometric authentication issues arise, the customer can switch to a passcode. Other methods of authentication are face recognition and voice recognition, which can be done directly via the issuer app or via a connected device linked to an issuer app, such as a digital watch.
Other than UX, there are technical details that also come into play. According to Adyen, these are the front-end libraries (to securely collect and transmit device information, as well as to display authentication flows) and the 3D Secure server. Both work together to exchange information and request authentication.
What to expect
Sasha Pons, Product Director at Ingenico, believes that the deployment of the new version of 3DS is going to be an iterative process, shaping up as version 2.1, version 2.2 and so on.
“Such a huge shift in the way merchants collect and share data will not happen overnight. There will be a period of adjustment, and you can take some comfort in the fact that many merchants like you will be going through the same thing,” Pons mentioned in a recent blog post. “What 3DS v2 asks of merchants will change as the practical realities of the new standard become clearer.”
He expects the particular rules around the format and quality of the data needed to evolve as time progresses.
Check upcoming Ai Conferences dates or
Follow Ai on Twitter: @Ai_Connects_Us

First Published on 18th August, 2017
Ai Editorial: Airlines need to be realistic about the flaws and limitations of rules-based systems – mainly their hindrance to scalability and their restrictions on instant delivery, writes Ai’s Ritesh Gupta
The shortcomings of the traditional rules-based approach for fraud prevention continue to get highlighted. At a time when the efficacy of fraudsters and hackers in cracking areas of vulnerability is on the rise, it is imperative for merchants to improvise and sharpen rules on the fly.
Before discussing the problems associated with the traditional rules-based fraud method, it needs to be underlined that there are more refined ways of ensuring a genuine travel shopper's experience doesn't get hampered. Overall, it is a must for merchants to identify user behaviour much more accurately, which is useful not only in turning away fraudulent transactions but also in identifying positive behaviour (genuine customers, especially big-ticket spenders) and allowing them to pass through. In addition, taking away rules, buying restrictions, 2FA and other cumbersome verification procedures improves the shopping experience, thereby lowering cart abandonment rates.
Merchants can’t be risk averse
The problem with deploying hard rules and relying on manual reviews is that this method tends to revolve around evaluating the same typical fields.
So how does a fraudster manage to break the rule and find a way out? How do they manipulate and defeat the system?
For instance, say a system has been set up so it doesn't allow more than 4 transactions in 60 minutes. Fraudsters figure out the stipulated rules – this duration-based rule among them – and then craft their program to attack the system while staying within the rule's limits.
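The duration-based rule in this example can be implemented as a sliding window. The sketch below is illustrative, and it also makes the weakness plain: a bot that learns the limit can simply pace itself just under it.

```python
from collections import deque

class VelocityRule:
    """No more than max_events transactions per rolling window_seconds,
    the kind of duration-based rule described above."""

    def __init__(self, max_events=4, window_seconds=3600):
        self.max_events = max_events
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, now):
        # Evict events that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_events:
            return False          # rule trips: too many recent transactions
        self.timestamps.append(now)
        return True
```

A fraudster who spaces requests just over 15 minutes apart never exceeds 4 per hour and sails through, which is exactly why a hard rule like this cannot stand alone.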
Certain rules systems initially seem easy to comprehend, indicating which orders will be accepted, rejected or reviewed. These are enough to detect simple, non-changing, known patterns. But as the need arises to add more rules – probably hundreds of them – to be clear about what's genuine and what could be fraudulent, even an astute executive may find it arduous to sort out the overlap among a growing number of rules while also making time for manual reviews. The moment more time has to be spent curating and arranging rules – how each rule is faring, which permutations and combinations are not working, what the impact is on average order value, where the thresholds are set – the job becomes tedious. Even where a point system is followed for rules, it can be a gruelling task.
In one of their blog posts, Accertify asserted that not all channels and products are alike when it comes to fraud risk. Citing an example, the team stated that rules may include IP address velocity, but an IP address from a telecommunications provider like Verizon isn't as user-specific as one from Comcast. So if one IP address raises doubt, the velocity threshold could be adjusted for that channel, but perhaps not for mobile. There is a need to apply rules specifically to certain channels and product lines while countering threats.
Rules that are based on a single channel behavior don’t pave the way for a complete picture of the shopper’s activity across multiple channels.
Find a way to ensure that erroneous and feebly coded rules don't end up swelling manual review queues.
In this context, the efficacy of machine learning offerings is coming to the fore, when compared with rules-based systems. Predictive analytics is a part of supervised learning in machine learning, and plays a part in predicting whether a cyber-criminal or a fraudster will repeat their act again in the future. At the same time, other types of machine learning – unsupervised learning – also have a role to play.

So what needs to be done?
Even in the case of machine learning, it is vital to distinguish between the various techniques deployed. Rather than focusing only on predictive analytics, there is a need to bank on pattern recognition, deep learning and stochastic optimization. Why? Because by focusing only on predictive analytics, there could be a gap for the fraudster to capitalize upon. What if a new threat surfaces with no previous data? Unsupervised machine learning can seek patterns and correlations in newly collected data, which helps identify both positive and negative behaviour, and is as effective at identifying genuine customers as it is at identifying fraudsters.
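As a toy illustration of the unsupervised idea: flagging values that deviate sharply from the pattern seen so far requires no labelled fraud examples. Real systems model many behavioural features at once; the single amount feature and the three-sigma cutoff here are arbitrary assumptions made for brevity.

```python
import statistics

def flag_anomalies(amounts, z_cutoff=3.0):
    """Flag transaction amounts far outside the observed pattern,
    with no labelled fraud data needed."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []                      # no variation, nothing stands out
    return [a for a in amounts
            if abs(a - mean) / stdev > z_cutoff]
```

A stream of routine bookings around one price leaves a sudden large redemption standing out immediately, even though no prior example of that fraud pattern existed.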
To increase the effectiveness of the fraud system, another form of machine learning must be used as well – pattern recognition.
If an entity follows a heavily rules-based methodology, the main KPI would be to cut the fraud rate as close to zero as possible. But at the same time, many borderline genuine transactions would fail to pass through.
Rather, the focus needs to be on relying on an algorithm that makes decisions to optimize sales as much as possible while keeping fraud and chargeback rates under control.
Go beyond rule-based prevention
Rules cannot keep pace with the volume of data and the variety of ever-evolving fraud that exists today. Do count on algorithm-oriented modelling. Assess how to make the most of business rules based on input from fraud specialists and machine learning classifiers, and bank on real-time risk scores to identify high-risk transactions. How can users be tracked across identities, devices, IPs and locations? Is there a mechanism for proxy detection?
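A simple sketch of blending specialist-written rules with a model's output into one real-time score follows. The weights and thresholds are placeholders, not recommendations, and a production system would calibrate them against observed fraud and chargeback rates:

```python
def risk_score(model_score, rule_hits, rule_weight=0.15):
    """Blend a classifier's score (0..1) with the number of business
    rules the transaction tripped. Weights are illustrative."""
    return min(1.0, model_score + rule_weight * rule_hits)

def decide(model_score, rule_hits, reject_above=0.8, review_above=0.5):
    score = risk_score(model_score, rule_hits)
    if score > reject_above:
        return "reject"
    if score > review_above:
        return "review"
    return "accept"
```

This keeps the specialists' rules in the loop as evidence rather than as hard accept/reject gates, which is the shift the paragraph above argues for.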
Also, as we highlighted in our recent articles, airlines are being recommended to focus on industry data and unique merchant data to combat fraud.
Rather than hard rules, airlines should direct fraud prevention efforts towards behavioural analysis, which is compatible with all payment methods, currencies and devices. A further step in sustaining or even improving conversion rates is to develop a decisioning algorithm with the mandate of maximising revenue at an optimal level of fraud risk. This will make an airline's fraud prevention truly agile – maximising revenue while minimising fraud.
How is machine learning helping in combating fraud? Hear from industry experts at Ai’s 6th Airline & Travel Payments Summit Asia-Pacific, to be held in Bali (29 – 31 August). For more info, click here
Follow Ai on Twitter: @Ai_Connects_Us

Developments related to chatbots continue to intrigue. Not too long ago the utility of chatbots was being questioned – their ability to understand tone, language and intent, and the value they could offer. Today, certain travel companies, including established airlines, are gearing up to accept payments within the conversational/messaging interface itself, hence the term transactional chatbots. So are AI chatbots finally living up to their intelligent branding?
The situation needs to be assessed from the perspective of who the real user of such an offering is. It is already being pointed out that the mobile-first lifestyle, or the tendency to interact with a connection via a messaging platform, especially in the case of a “Millennial”, is one major driving force.
So be it a conversational travel insurance chatbot or a flight search chatbot, the use of artificial intelligence to interact with travellers in a conversational style is on the rise. Expectedly, a “seamless” payment option via chatbots is emerging as a possibility. As Kaivalya Paluskar, Solutions Consultant, APAC, Ingenico ePayments mentioned, users have to date largely been redirected to a new page to pay, but this is now gradually evolving.
What this means is that the user need never be sent to a website to finish the transaction.
The team at Ingenico has worked on what it describes as an “in-built” solution, where the user “doesn’t go out of the chatbot to make the payment”, said Paluskar. “We can facilitate this for different platforms, including Facebook or any open API platform,” he said.
According to specialists, there could be an instance where a microsite opens when a user attempts to make a payment for the first time, but that would be just a one-time occurrence. Subsequently, the user would remain within the chatbot interface to complete transactions.
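The one-time-microsite flow the specialists describe could be sketched as a token vault: the first payment goes through a hosted page that tokenises the card, and every later purchase reuses the stored token without leaving the chat. This is a hypothetical design sketch, not Ingenico's actual implementation; all names are invented:

```python
class TokenVault:
    """Stores one payment token per user after the first hosted payment."""
    def __init__(self):
        self._tokens = {}          # user_id -> payment token

    def has_token(self, user_id):
        return user_id in self._tokens

    def store(self, user_id, token):
        self._tokens[user_id] = token

    def token_for(self, user_id):
        return self._tokens[user_id]

def handle_chat_payment(vault, user_id, amount):
    """Return the action the chatbot should take for a payment request."""
    if not vault.has_token(user_id):
        # First-ever payment: redirect once to a hosted payment microsite,
        # which tokenises the card and calls `vault.store` on success.
        return ("redirect_to_microsite", None)
    # Later payments stay inside the conversational interface; the stored
    # token (and `amount`) would be passed to the PSP's charge call.
    return ("charge_in_chat", vault.token_for(user_id))
```

The design choice mirrors the article: card data never touches the chatbot platform, only an opaque token does.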
Airlines are relying on partners to step up their capabilities in natural language processing, and accordingly to improve the user experience.

Travel companies have survived a massive threat to their existence by learning to share sensitive data on a private cloud platform, writes our guest columnist JJ Kramer, Chairman of Perseuss Steering Group
A decade ago, airlines operated in the usual silos of secrecy. The competitor was always the enemy. The competitor was trying to eat the other airline's lunch. But gradually they realised they had a common enemy who was trying - and frequently succeeding - to eat everyone's lunch.
The fraudster
International fraud gangs, stealing and abusing credit card data, were repeatedly ripping off one airline after another. Profitability was plunging and the situation was getting worse. The need to share information about fraudsters, legally, became apparent to all operators.
Now in 2017, travel companies are in a much better place. They have identified that there are four key steps which industries must take to fight fraud. Curiously, they all involve trust.
1. Build trust in your peers
The fightback started with pairs of airlines sporadically meeting to swap experiences and known fraudster information. Personal relationships formed and the trust between them became the bedrock of further progress.
Among those pioneers was JJ Kramer, Chairman of Perseuss Steering Group. According to him, fraud thrives in an atmosphere of fear and mistrust. People have to start the fight against fraud by building a new atmosphere of cooperation and confidence in each other. Essential to building that trust was people meeting in person, not online. Face-to-face meetings were vital.
2. Build trust in your platform
But things soon began to grow and it was obvious that the use of technology was also necessary. Who could be trusted to build the infrastructure? Who would govern it?
The airlines eventually selected an independent IT company with known competence to build the platform. The platform had high levels of security and the user community was 'members only'.
Airline fraud analysts were able to submit data about known fraudsters and check suspect data against the database. Fraud was being identified and reduced.

3. Build trust in your data
The airlines could now tackle the small but important matter of the data. Who owned it? Who controlled it? Was shared data owned by everyone once it was pooled? Kramer states that it was commonly agreed that fraud data is owned by the company that submitted it and can be deleted by them at any moment. This decision increased the community's trust in the platform, because members knew they controlled it, not the other way round.
4. Choose leaders you know and trust
As the user community grew to over 100 companies and welcomed in non-airline businesses (such as online travel agencies, railway and retail companies), the management team adapted. A Steering Group was formed. People chose representatives they had met in person, regardless of the size of the company they worked for. Personal contact was, again, an important consideration. As Kramer mentions, the Steering Group channels and prioritizes the development process, and that is proof that the users are in charge, not anybody else.
About Jan-Jaap Kramer
JJ has been involved in the battle against airline card fraud for over 15 years. In his previous role as Manager Cashier Department/Credit Cards for Dutch airline Martinair (a subsidiary of KLM Royal Dutch Airlines) from 1999 to 2011 he was responsible for the security of the company's ecommerce and call centre passenger bookings. In 2011 he established his own consultancy company to help business and industry fight fraud. Soon after that he was elected chairman of Perseuss, the travel industry’s anti-fraud organization.
About Perseuss
Perseuss is the global travel industry’s own solution to the battle against fraud. It was founded in 2008 by a small group of airlines and soon became an industry standard for data-sharing. Today, the community has participants from around the globe including airlines, travel agents, railway, and retail companies. Its flagship offering is an online shared negative database, recently updated to include email age verification and artificial intelligence. It also operates FraudChasers, an online forum for anti-fraud professionals. Perseuss plays a major role in cross-border police Action Days to apprehend fraudsters.

First Published on 25th January, 2018
Ai Editorial: Companies can defend themselves adequately by using a tool like machine learning, and at the same time there needs to be reliance on rules and the human component as well, writes Ai’s Ritesh Gupta
Data breaches and compromised credentials are on the rise, and the task of a Chief Security Officer (CSO) or Chief Information Security Officer (CISO) in safeguarding against the takeover of loyalty accounts is becoming more challenging.
According to a recent study by Connexions Loyalty, travel accounts could be quite valuable on the dark web (airline loyalty accounts: $3.20-$208 each).
As Sift Science highlighted in one of our recent articles, in all likelihood everyone's credentials have already been compromised, and it is imperative for e-commerce companies to strengthen the “authentication” aspect so that the damage can be controlled as far as account takeover (ATO) or gaining access to a loyalty account is concerned.
And one of the main tools for the same today is machine learning.
Kevin Lee, Trust & Safety Architect, Sift Science says finding unknown unknowns is key to making machine learning powerful. “If you are creating a rule, it is typically being created because there has been a mishap in the past. So rules are created with certain parameters. If you try to create one-off rules – say, number of clicks on a particular item, over $100, with a particular contact number and email ID, and block it or allow the user to redeem – one can get buried in such circumstances and it gets difficult to figure out the performance. The trouble with that is fraudsters are literally being financially incentivized to reverse engineer those systems. Machine learning creates a more complex scenario, making it more challenging to reverse engineer.”
Lee, a speaker at the recently held Loyalty Fraud Workshop in Palm Springs, California, added that machine learning can look at the entire span of an account and look for anomalies. A human analyst's capabilities are restricted to evaluating a certain number of signals at a time before coming up with a verdict. “But there is enough data out there and that's really when machine learning comes into play. With thousands or tens of thousands of members in a loyalty program, machines become smarter and identify anomalies (in usage of accounts or user behavior).” So by identifying anomalous areas within large data sets, one can make intelligent judgments accordingly.
Efficacy of machine learning
Companies can defend themselves adequately by using a tool like machine learning, and at the same time there needs to be reliance on rules and the human component (intervention and feedback) as well. “All of this works together in conjunction to deliver the best results,” said Lee. Other than putting in place strong measures for authentication (related to accessing accounts), Lee recommends analysis to assess whether there is already a problem in the system. What is the current level of account takeover on the platform? “What sort of data are companies tracking and measuring? And this isn't related to fraud or ATO purposes, but in general. So many organizations don't have a grasp of their own data. So it becomes tough to assess how big the problem is. So the first area that needs to be assessed is data quality and data volume in terms of how clean that is,” he said. Once a virtuous data pipeline is in place, it can be built upon with machine learning models and rules, and tools can be created to help the team analyze the ATO problem.
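The interplay Lee describes – machine learning, rules and human feedback working in conjunction – might look like this in outline. The score thresholds and the example rule are assumptions for illustration, not Sift Science's actual logic:

```python
def decide(transaction, model_score, rules):
    """Blend an ML risk score with hard business rules, and route
    borderline cases to a human analyst (illustrative thresholds)."""
    # Hard rules veto outright, e.g. blocked BINs or sanctioned regions.
    for rule in rules:
        if rule(transaction):
            return "block"
    if model_score < 0.10:
        return "approve"
    if model_score > 0.85:
        return "block"
    # The analyst's verdict on these cases later feeds model retraining.
    return "manual_review"

# Example rule: flag redemptions from accounts younger than one day.
too_new = lambda t: t.get("account_age_days", 0) < 1
print(decide({"account_age_days": 400}, 0.03, [too_new]))  # prints "approve"
```

The manual-review bucket is what closes the loop: human intervention and feedback become the labelled data the next model iteration learns from.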

Crafting a holistic picture
How about data from airlines specifically? Lee said this is a crucial area. There are signals that fraud prevention specialists look out for, and these are not just related to transactions but also to buying patterns, post-booking behavior and so on. With the data collected, one can churn it through various permutations and combinations to identify potential fraud patterns left behind by fraudsters who have made micro-changes between transactions in one coordinated fraud attack to trick the system. Using real-time pattern recognition, even micro-changes can be proactively identified and tagged to the same fraud pattern group.
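Grouping transactions that differ only by micro-changes can be approximated by canonicalising the fields fraudsters tend to tweak, so that trivially varied values collapse onto one key. The choice of fields and the normalisation rules below are illustrative, not Sift Science's actual model:

```python
import re
from collections import defaultdict

def fingerprint(tx):
    """Collapse micro-changeable fields into one canonical key:
    lowercase the email, drop +tags, dots and digits from its local part."""
    email = tx["email"].lower()
    local, _, domain = email.partition("@")
    local = re.sub(r"[.\d]", "", local.split("+")[0])
    return (local, domain, tx["card_bin"], tx["city"].lower())

def group_attacks(transactions):
    """Return groups of 2+ transactions sharing a canonical fingerprint."""
    groups = defaultdict(list)
    for tx in transactions:
        groups[fingerprint(tx)].append(tx)
    return [g for g in groups.values() if len(g) > 1]

txs = [
    {"email": "j.doe1@mail.com", "card_bin": "411111", "city": "Bali"},
    {"email": "jdoe2@mail.com",  "card_bin": "411111", "city": "bali"},
    {"email": "j.doe+x@mail.com", "card_bin": "411111", "city": "Bali"},
    {"email": "alice@other.com", "card_bin": "522222", "city": "Oslo"},
]
```

Here the three micro-varied `j.doe` bookings collapse into a single suspected attack group, while the unrelated transaction stands alone.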
The data that Sift Science leverages includes attributes associated with the identity of a user, behavioral data (browsing patterns, keyboard preferences etc.), location data, device and network data, transactional data, decisions (business actions taken), 3rd party data (geo data, currency rates, social data etc.) plus custom data that is specific to a particular merchant.
A couple of examples:
· On-site behavior: Site data, including mouse cursor movements and every single step of the user's journey, is collected and analyzed to reveal insights into users' traits. “With enough data it can be observed that when the average person redeems gift cards or loyalty points, most likely it is not their first time. People tend to take their loyalty program or points/miles seriously. Even before the transaction takes place, machine learning can map this holistic behavior. So a member keeps checking a particular redemption option and, when they have enough currency, they go for it. It might take them months to complete this. These are all good indicators. On the other hand, these are missing in account takeover (instances),” said Lee.
· Post transaction behavior: So let's say a ticket from an airline or an OTA has been bought or redeemed; a legitimate user may email it or share the itinerary with family or friends. “But in the case of a fraudster this generally doesn't happen,” said Lee.
“A city pairing, time of the day, seasons…there could be a flight booking that might be risky, and another might not be risky at all. So a combination of factors can come into play,” said Lee.
The team has also worked on a set of capabilities that enables one to build custom fraud processes with less code.
Types of machine learning
The power of machine learning is still in the supervised state, asserts Lee. Typically, supervised machine learning follows a cycle of training, predicting and acting stages. “(The industry) is still some time away from functioning in an unsupervised way,” he said. When humans are involved, or there are known “bads” such as chargebacks, the system can learn quicker in such a supervised environment. “Unsupervised machine learning tends to be less accurate (in comparison). It is lower maintenance, of course.” Sift Science uses an array of predictive models, including ones specific to a business plus network models, because spotting bad behavior on one site helps to identify it on other sites as well.
As for not being vulnerable to new types of fraud attacks, companies like Sift Science look at how fraudsters try to break existing system controls and rules. For instance, when fraudsters craft email IDs or addresses to circumvent the controls enforced, data normalization coupled with n-gram analysis extracts the key substrings in the data field to identify repeatable data patterns. That is one example of how machine learning plays its part.
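A crude stand-in for that normalisation-plus-n-gram step: extract character n-grams from each lowercased field value and keep the substrings that recur across many records. The n-gram length and support threshold are illustrative assumptions:

```python
from collections import Counter

def char_ngrams(s, n=3):
    """The set of character n-grams of one normalised field value."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def shared_patterns(fields, n=3, min_support=3):
    """Substrings recurring across at least `min_support` field values --
    a simplified sketch of n-gram analysis for repeatable fraud patterns."""
    counts = Counter(g for f in fields for g in char_ngrams(f.lower(), n))
    return {g for g, c in counts.items() if c >= min_support}

emails = ["promo77a@x.com", "promo77b@x.com", "promo77c@x.com", "alice@y.org"]
patterns = shared_patterns(emails)
```

With these sample emails, the recurring `promo77` substrings surface even though each full address is unique – the kind of repeatable pattern a per-address blocklist rule would miss.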
For Ai’s 2018 Events, check - www.aieventdates.com