Here you can find news gathered from systems presented in our directory
View all news of GoCardless
Posted on June 4, 2019
How do consumers want to pay for subscriptions, memberships, bills and instalment plans? The answer to this question can hold the key to conversion rate optimisation for businesses. Knowing which payment methods your customers are comfortable with, and which they actively prefer to pay with, can have a significant impact on revenue generation for businesses taking recurring payments.
We partnered with YouGov to ask 12,785 consumers across the UK, France, Germany, Spain, Denmark, Sweden, USA, Canada, Australia, and New Zealand what their payment preferences are in 2019. The research covers four typical recurring purchase use cases: traditional subscriptions, online subscriptions, household bills, and instalments. Together, these markets represent more than two-thirds of the world’s recurring payment volume.
Want the full report? Download your FREE copy here. Or watch Duncan Barrigan, VP Product at GoCardless, deliver the foreword on video below.
As the data shows, the answer to this question depends on where your customers live. If you take a look at existing research, for example, you’ll see that cheques are more relevant in America relative to the rest of the world. While in China, mobile payment transactions are huge. Cultural and technological landscapes can differ drastically from country to country. And these can heavily influence consumers’ payment preferences.
Let’s take a look at some key insights from the data…
On average, 40% of Germans are ‘very unlikely’ to use credit cards (and 29% feel the same about debit cards).
Canadians are more likely to make recurring payments on credit cards than any other nationality surveyed. 27% are ‘very likely’ to use them for traditional subscriptions (such as gym memberships), and 26% for household bills and online subscriptions.
Americans are almost as keen – credit cards are their preferred method for both traditional and online subscriptions.
Outside North America, consumers in every country surveyed said they were ‘very likely’ to pay their household bills the traditional way – by Bank Debit (or as Europeans tend to call it - Direct Debit).
In 9 out of 10 markets surveyed, around one-third of consumers said they were likely to choose Bank Debit to pay for online subscriptions. And yet, looking at the top 44 global subscription websites, only 1 offered Bank Debit as a payment option (and only in Germany).
In Denmark for example, an average of 44% of consumers were ‘very unlikely’ to use a digital wallet to make any kind of recurring purchase. Whereas in Spain, an average of 16% were ‘very likely’ to.
For B2C companies, understanding consumer purchasing preferences is critical. Having been given a wide choice of payment methods, consumers are minded to exercise it - and they may well go elsewhere if they can’t pay the way they want.
If there’s anything to take away from this research, it’s that there is no ‘one size fits all’ global offering for recurring payments. There is no single solution that will suit all your customers across every country, income bracket or age group. But armed with the right data, you can implement the best solutions for your business, and optimise your conversion.
Posted on June 4, 2019
How do consumers want to pay for their subscriptions, bills and instalments? The answer to this question may well hold the key to conversion rate optimisation for businesses. Knowing which payment methods your customers are most comfortable with, and which ones they prefer to pay with, can have a considerable impact on revenue generation for businesses collecting recurring payments.
With the help of YouGov, we surveyed 12,785 consumers across 10 markets (the UK, France, Germany, Spain, Denmark, Sweden, the USA, Canada, Australia and New Zealand) to discover their payment preferences in 2019. The study covered four types of recurring payment: traditional subscriptions, online subscriptions, (household) bills and instalments. We selected these markets because, together, they represent more than two-thirds of the world’s recurring payment volume.
The report makes clear that everything depends on where consumers live. For example, cheque payments are far more significant in America than in the rest of the world, while China strongly favours mobile payments. Technological and cultural landscapes vary enormously from one country to another, and they heavily influence consumers’ payment preferences.
Here is an overview of what the report taught us:
On average, 40% of Germans are ‘very unlikely’ to use credit cards, and 29% feel the same about debit cards.
Canadians are more likely to make recurring payments by credit card than any other nationality surveyed: 27% are ‘very likely’ to use them for traditional subscriptions (such as gym memberships) and 26% for their bills and online subscriptions. Americans are almost as enthusiastic: credit cards are their preferred method for both traditional and online subscriptions.
Outside North America, consumers in every country surveyed said they were ‘very likely’ to pay their bills the traditional way: by bank debit (also known in Europe as Direct Debit).
In 9 of the 10 markets surveyed, around one-third of consumers said they would probably choose to pay for their online subscriptions by bank debit. Yet of the 44 leading global subscription websites, only one offers bank debit as a payment option (and only in Germany).
In Denmark, 44% of consumers said it was ‘very unlikely’ that they would use a digital wallet for recurring purchases. By contrast, 16% of Spanish consumers said they were ‘very likely’ to use one.
Understanding consumers’ purchasing preferences is essential for B2C companies. Consumers now have a wide choice of payment methods and are prepared to use it, or to go elsewhere if they find they cannot pay the way they want.
The takeaway from this report is that there is no ‘one size fits all’ global approach to recurring payments: no single solution will suit consumers across every country, income bracket or age group. With the right information, you can implement the best solutions for your business and optimise your conversion rate.
Posted on May 31, 2019
Merchant Risk Council events are some of the best places to discuss the challenges and opportunities facing the payments sector, from open banking and borderless payments to blockchain and virtual marketplaces.
One topic that dominated this month's Merchant Risk Council London was Strong Customer Authentication (SCA), part of the PSD2 regulations. SCA is the new regulation for authenticating online payments, and will be rolled out across the European Economic Area (EEA) on 14 September.
(Note: On 13 August 2019 the Financial Conduct Authority (FCA) confirmed that enforcement of SCA in the UK will include a phased 18-month implementation, starting on 14 September 2019 and ending March 2021.)
SCA has been put in place to increase security for payers and to reduce fraud. It requires two-factor authentication, meaning two of three independent factors (something the payer knows, has, or is), for all electronic (online) payments.
We talked about SCA with players from across the payments landscape, both at the event itself and at a roundtable dinner we hosted on the topic.
This article looks at three key topics that emerged from those discussions.
It’s hard to dispute the core aims of SCA - reducing fraud and making online payments more secure. Everyone we met was in favour of taking a principle-based approach to achieving these goals and was excited by the potential for innovation that this allows.
However, there was widespread frustration at the lack of consistency in the depth and granularity of the regulation, which some attributed to the pressure that regulators face from well-funded interest groups. In turn, this could lead to an inconsistency of implementation.
The 14 September deadline is fast approaching and few - if any - of the online businesses we spoke to felt ready for the change. While C-suite executives seem to have recognised the size of the potential impact that SCA could have on conversion, there is a broad lack of clarity about what issuers and PSPs could (and should) be doing to mitigate this.
Businesses are also divided on their readiness to communicate to their customers. While some are actively communicating, others worry that reaching out proactively will spook their users into churning.
Many businesses are still waiting to take action on SCA and the communication approaches of those that are taking action differ widely. The reason for these differences stems from the very real worry that making the necessary changes could lead to a significant drop in conversion.
As mentioned above, the point of SCA is to reduce fraud and make online payments more secure. The question is whether payers really value this over the convenience of existing checkout flows.
While we know that customers want convenience and a lack of friction, and regardless of the intentions of SCA, the new regulations add an extra barrier to successfully completing a transaction. Nobody knows for certain that customers want, or would even tolerate, more security measures than are already in place.
In a recent GoCardless survey of 1,000 payers, 45% of those asked said they would be ‘frustrated’ by a more secure but lengthier checkout process when buying from their favourite brand, while a further 23% would go as far as shopping with them less often because of it.
A common point of reference for businesses is similar legislation rolled out in India in 2014. Local businesses reported an overnight drop in conversion of 25%. Companies affected by SCA are understandably keen to avoid a similar scenario.
With SCA likely to remain on the agenda for the foreseeable future, the best course of action is to keep the industry-wide discussion flowing, especially as more businesses begin to take note and begin developing appropriate strategies.
GoCardless will be attending Money 20/20 Europe in Amsterdam on 3-5 June. If you’d like to talk to us more about SCA and the wider payments landscape, you’ll find us at booth K103 in Hall 1.
Posted on May 1, 2019
Last month we announced our new scheme launch with Zuora in Australia, and this month we’ve turned our attention to the Nordics. We’re excited to announce that Zuora customers in Sweden and Denmark can now take recurring payments with GoCardless via Autogiro and Betalingsservice!
GoCardless has been processing payments in the Nordics since 2015, and with the addition of Autogiro and Betalingsservice to our Zuora partnership, we’re uniquely positioned to support the growth of the Nordic subscription economy and solve the common issues that enterprise businesses face when collecting recurring payments.
John Phillips, Managing Director, EMEA, Zuora, said: “Our mission is to enable all companies to be successful in the subscription economy. Collaborating with GoCardless means we can offer our customers, and their customers, more flexibility and scalability when it comes to subscribing to new goods and services."
Bg Autogiro is a Direct Debit scheme for collecting Krona-denominated payments from a bank account in Sweden. It's the Swedish equivalent to SEPA Direct Debit in the Eurozone and Bacs Direct Debit in the UK.
Bg Autogiro is a popular method for collecting payments in Sweden. Approximately eight out of ten Swedes use Autogiro, with the average customer paying five to seven bills per month via the scheme.
What are the benefits of Autogiro?
Autogiro payments are bank-to-bank. Autogiro operates through the Bankgirot clearing scheme, rather than through the card networks. This has significant benefits when taking payments on an ongoing basis:
Betalingsservice is the Direct Debit scheme in Denmark; it enables merchants to automatically collect Krone-denominated payments from their customers.
What are the benefits of Betalingsservice?
The Betalingsservice clearing scheme avoids the need to use any card networks. Operating in this way can be beneficial when taking payments on a regular basis, for example:
GoCardless’s Head of Global Enterprise Partnerships, Karl Stjernstrom said: “As a global payments provider, we know that businesses need scalable solutions to effectively manage recurring payments across multiple territories. We’re really excited to be growing our partnership with Zuora through these new scheme launches to help businesses collect payments through Bank Debit from their customers across the Nordics”.
Our latest integration with Zuora is another key milestone in our efforts to solve the common problem of late payments around the world. With BECS NZ, PAD in Canada and ACH in the USA also soon to be added, it’s another step towards our mission of creating a truly global bank-to-bank payments network.
GoCardless for Zuora is now live in the following countries and schemes:
| Country | Scheme | Currency |
|---|---|---|
| Eurozone | SEPA | EUR (€) |
| UK | Bacs | GBP (£) |
| Australia | BECS | AUD ($) |
| Sweden | Autogiro | SEK (kr) |
| Denmark | Betalingsservice | DKK (kr) |
To find out more about enabling the GoCardless integration, simply schedule a call to speak to one of our team.
Posted on April 30, 2019
“Uncertainty is an uncomfortable position, but certainty is an absurd one” – or so said the French philosopher Voltaire.
Uncertainty in business is to be expected. But for many UK-based businesses with global reach, the hyper-uncertainty caused by Brexit over the last three years has been particularly uncomfortable.
As a business employing people of 37+ nationalities, with a strong European base and an international outlook, many of us here were surprised by the decision to leave the EU. But come 24 June 2016, we set to work researching, scoping and contingency planning.
From the start, our guiding principle was to prepare for leaving the EU without a deal – to ensure no disruption to customers, while continuing apace with our international expansion.
The UK’s political ties to Europe may have been slowly unravelling, but we stayed resolute in our belief that connectedness is the way forward for GoCardless and our customers.
It quickly became obvious that the only way we could guarantee zero disruption to our customers was to acquire a second licence to provide regulated services in another European Member State.
We researched options for the licence – assessing countries on over 30 dimensions including cost, regulatory requirements and speed of setup. We consulted law firms, regulators and consultants – and ultimately decided on France.
France represents our largest customer base outside the UK, and is the home of our first international office. Having made our decision, in the summer of 2018, we began our application for a licence with the French regulator.
We realised early on that we would need to make a series of changes to our product in order to manage our customers’ money differently: in particular, to meet the regulatory requirements of providing services under two licences to a set of customers that had, until then, been served under one.
Our product operates on top of a number of bank payment schemes, including Bacs in the UK, SEPA in Europe, PAD in Canada and BECS in Australia. Over the last decade, we’ve been on a mission to join together these schemes into one network for our users. Building a new scheme is a sizeable piece of work for our banking operations, product, development and support teams. In some cases, it can take years.
Planning for a no-deal Brexit was like building a completely new scheme on steroids, knowing we may never have to use it.
We also started work to guarantee we could continue to process payments using the European Direct Debit scheme, SEPA, by integrating with Barclays’ European Hub bank in Germany. Some of this work turned out to be unnecessary, since on 7 March this year, the European Payments Council (EPC) ruled that the UK could remain in SEPA, regardless of the outcome of Brexit.
The hardest business decisions we’ve had to make around Brexit have been around timing. After an initial period of intense research, the work we had to do was relatively clear; knowing when to press the button was anything but.
From one day to the next, the likelihood of a no-deal scenario appeared vastly different. Deciding when to schedule the work, among the long list of other priorities, was a continual challenge.
Three years on from the referendum and we still don’t know if, how and when we’re going to leave the European Union. Thankfully, whatever happens, GoCardless is prepared.
It’s taken 20+ people in seven different teams, working more than 4500 hours – over 2 years in working days – to prepare for a no-deal Brexit. In the process, we’ve learned lessons about being more forward-looking, and some of our teams have worked more closely together than ever before.
The work we’ve done may never see the light of day. But ultimately, like many other service businesses, we just couldn’t take the chance. We had to make the uncertain, certain.
Posted on March 20, 2019
The Chancellor, Philip Hammond, last week announced plans to “tackle the scourge of late payments” to small businesses, including a requirement for listed companies to report on their performance in annual reports and accounts.
So, how did we arrive at this point and what needs to be done?
When legislation that requires businesses to report on their payment practices was first passed, there was lots of uncertainty as to whether it would help clamp down on late payments at all, and whether that legislation would be the extent of the Government's efforts.
It is now clear that the Government has not dropped this issue from its agenda. This is great news, as late payments have been a thorn in the side of SMEs for many years, with many big businesses accused of treating their suppliers as an additional line of credit. Recent GoCardless research found that more than 80% of small business owners say payment uncertainty means they can’t grow their business the way they want to.
Bacs Payment Schemes has calculated that 5.7 million SMEs in the UK are owed about £13 billion in total. Research from the Federation of Small Businesses (FSB) suggests that tackling the problem could add £2.5 billion to the UK economy and keep an extra 50,000 businesses open each year.
These stats are backed up by Small Business and Productivity, published by the Business, Energy and Industrial Strategy (BEIS) committee.
To quote the report, late payments “inhibit the ability of SMEs to grow, affecting overall UK productivity. Late payments are built into the business models of too many companies, leading to many SMEs losing staff, profits and their businesses. This is totally unacceptable, unfair and constitutes a particularly disgraceful form of market abuse."
The Chancellor wants companies to nominate a specific non-executive director to take responsibility for the supply chain. Under Business Secretary Greg Clark, the Department for Business, Energy and Industrial Strategy (BEIS) will lead the consultation and introduction of the new regulations for listed entities – though details of the time frame involved have not yet been revealed.
Could, then, the end of late payments finally be in sight?
The FSB certainly seems to think so. Its National Chairman, Mike Cherry, said: “Poor payment practices by big businesses towards their smaller suppliers are rife and pernicious. Four out of five small businesses have been paid late, and we told the Chancellor that this was the moment to act, to tackle this scourge once and for all.”
“The end of late payments can’t come soon enough, to bolster small businesses at a time when they are in great need of support and a lift in confidence.”
If we were to put a series of measures on the table, these would include all medium and large companies being required to sign the Prompt Payment Code (as outlined by the FSB's ‘Fair Pay, Fair Play’ campaign), and adopting statutory payments terms of no more than 30 days. And the Small Business Commissioner should also be given powers to fine persistent offenders.
There must also be a board level conversation and all businesses need to step up to the plate and be open and transparent. After all, the economy thrives when small, medium and large businesses work together.
It's clear, though, that there will be no quick win here. Earlier this year, GoCardless gave an update on the duty to report on payment practices and performance legislation, and we found that although progress was being made, challenges and issues remained and the initiative had to be viewed as a stepping stone to greater measures to protect smaller organisations.
With this in mind, the claim that the Chancellor’s plans will mark the end of late payments seems somewhat optimistic. In the absence of statutory maximum payment terms and/or fines for late payment, there will always be a risk that SME suppliers will be forced to accept lengthy payment terms and/or face a delay in getting their invoices paid.
By continuing to focus only on promoting better practices through transparency and reporting, it seems almost inevitable that it will take longer for the situation on the ground to improve.
It will now be interesting to read the response that the Government is due to publish in relation to last year's call for evidence on ways it can eliminate bad payment practices.
All eyes will, therefore, be on how many of these proposals the Government will seek to implement.
Posted on March 17, 2019
Today we’re pleased to announce that Zuora is partnering with GoCardless to help Zuora customers take recurring payments in Australia!
GoCardless has been processing payments in Australia since 2018, and with the addition of the BECS scheme to the Zuora partnership, the two companies are uniquely positioned to support the growth of the Australian subscription economy and solve the common issues that enterprise businesses face when collecting recurring payments.
According to Iman Ghodosi, Zuora's GM Asia Pacific Region:
“Collaborating with GoCardless means we can offer our customers, and their customers, more flexibility and scalability when it comes to subscribing to new goods and services. By providing more payment options than ever before, companies are able to tailor their offerings to attract a wider range of subscribers, on more attractive and profitable terms.”
The Australian market has been experiencing a drastic shift in the payments landscape over the past few years, with more than two-thirds of Australian businesses planning to adopt a subscription model by 2018-2019, according to the ANZ Subscription Shift, a 2016 survey by Ovum for Zuora.
Additional findings from the Zuora survey include:
With an average of 44 Direct Debit transactions per person each year, Australian consumers use this bank-to-bank payment method for a wide range of payments including utility bills, childcare, sports club memberships, insurance, housing, and online subscriptions.
Additionally, businesses often pay via Direct Debit for their ongoing relationships with partners or suppliers, like marketing agencies, accountants or wholesalers. Between 2013 and 2016 the number of Direct Debits increased by 30% (Reserve Bank of Australia, Consumer Payments Survey, April 2017).
In Australia, the annual value of transactions processed through Direct Debit is almost 6 trillion AUD. This is 35.9% of all non-cash transactions (Bank for International Settlements, October 2017). In the past six years, the volume of Direct Debit transactions has increased by more than 50%, reaching more than one billion in 2016.
A recent article from The Financial Review explores in more detail why Australians are ditching credit cards in favour of 'buy now pay later services'.
Direct Debit is the perfect solution for collecting payments, and it seems that customers in Australia agree; almost 36% of non-cash transactions were made through Direct Debit in 2017, with trends indicating that this number will grow in 2018.
Direct Debit has some key advantages over other payment methods. Designed specifically to handle recurring payments, Direct Debit is ideal for the subscription economy. It’s a bank-to-bank payment method, where the customer pre-authorises the merchant to collect payments directly from their bank account. It solves the problem of involuntary churn and drastically reduces the amount of time invested in managing the process through automating payment collections.
Unlike credit and debit cards, Direct Debit never expires, and bank accounts can’t be physically lost or stolen. What’s more, failure rates with Direct Debit payments are as low as 0.5% – and that figure is almost always simply because the customer has insufficient funds. GoCardless enhances Direct Debit, providing a payment solution specifically designed for companies using the subscription business model. The GoCardless product uses Direct Debit to support those key metrics driving success in the subscription economy.
The ability to offer Direct Debit as a subscription payment method in Australia is a significant milestone for Zuora, as they continue to grow in the region. A recent survey by Zuora revealed that 70% of Australian and New Zealand businesses are planning to make the shift to a subscription model in the next two to three years.
This move toward a subscription billing model is further supported by the same research, showing that Australians and New Zealanders spend an average of $660 AUD per month on subscriptions or recurring goods and services.
As GoCardless CEO and founder Hiroki Takeuchi puts it:
“As more and more businesses become international, they face endless frustrations in managing payments across multiple territories. Our partnership with Zuora has enabled us to reach an international market and offer our automatic payment solution to global businesses through one of the world’s most popular subscription billing platforms. We’re thrilled to be taking our partnership to the next level and look forward to bringing our solution to Zuora’s customers in Australia and beyond.”
The GoCardless integration with Zuora is another key milestone in our efforts to solve the problem of late payments around the world.
GoCardless for Zuora is now live in the following countries and schemes:
| Country | Scheme | Currency |
|---|---|---|
| Eurozone | SEPA | EUR (€) |
| UK | Bacs | GBP (£) |
| Australia | BECS | AUD ($) |
Posted on March 7, 2019
We know there’s more to do when it comes to gender diversity in our team. Since last March, the number of women at GoCardless has grown by 3% to 37% overall – including a 7% increase in our tech teams (15%), 3% in management (32%), and 6% in women in senior leadership roles (20%).
Although numbers have gone up, we’re looking at how we can improve gender balance in areas where it lags the most – Product Development and senior leadership.
To help us achieve this, and because they believe in unleashing individuals’ potential, some of our women leaders came together to form a group focused on gender balance at GoCardless, sponsored by Catherine Birkett, our CFO.
Last year, one of our employee engagement surveys showed women at GoCardless perceived fewer career opportunities than men.
“We wanted to understand what was driving that perception,” explains Kasey, our Data Protection Officer. “As a first step, some of our senior female leaders came together as a group and planned an ‘unconference’ – a way to get bottom-up input from everyone across the business.”
At the unconference, we tackled some complex topics, like how focusing on women might affect the role of allies in driving change, and how some of the barriers to women's success actually affect all colleagues, such as the availability of mentorship programmes and flexible working.
At the end of it, we identified four focus areas which we hope will help us bring more gender balance to GoCardless:
Magda, Senior Operations Manager at GoCardless, has been involved in women’s groups before and has seen what an informal community that works towards the same goal can achieve.
Magda’s leading one of our four focus areas: getting more women to know about GoCardless and its culture, for example by supporting recruitment efforts through engaging with organisations that focus on supporting women. Magda is also looking at creating a support network for women at GoCardless by reaching out to other businesses, as well as youth groups and colleges.
“Connecting with other organisations is a way to change the perception of how young women see technology,” says Magda. This has been shown to have positive effects on women’s careers and even feelings of connection (HBR, 2018).
For us, flexible working means being able to adjust your working schedule for personal preference or need, including working remotely.
Nina Pepper, our Director of Demand Generation, says: “This doesn’t just impact women with children. Everyone needs flexibility for child commitments or it could be that you have an entirely different commitment or interest outside of work that could benefit from flexible working.”
As part of our focus on this, we’ve identified the areas that prevent people from taking advantage of flexible working.
Nina explains: “People need to feel able to work outside of ‘usual’ hours and management need to have the tools and the right attitude towards it. I look forward to seeing the impact of the initiatives and getting feedback from everyone at GoCardless about how we’re doing.”
Marta Lia Requeijo, our Money Laundering Reporting Officer, has experienced first hand the impact of positive mentoring and coaching.
“Throughout my career, I have been fortunate to be inspired by great mentors,” says Marta. “The positive, long-term impact of sharing life experiences and steps towards success was life changing for me.”
Being a mentee can help you feel more supported, give you fresh perspectives on career development, and help you build your network. But it also benefits mentors, who better understand what is happening with younger team members.
We’re working on a pilot for an official mentoring and coaching programme to launch later in the year – and in the meantime, our leaders have started informal mentoring of younger women at GoCardless.
We’re sharing what we’re doing internally and externally to raise awareness, surface issues and gain support.
We’ll be looking at how we communicate about gender diversity and engage on these issues with our colleagues and wider networks.
“Our messaging and communications are so important to success,” says Data Protection Officer, Kasey.
“We want everyone to know our purpose and mission, to understand what tools and resources are there to help, and to have someplace to go with questions, ideas or even rants. My goal is for all women (and men) at GoCardless to understand why this matters to us and what we’re doing about it.”
Like many groups that support gender diversity at work, we face challenges – from finding the time to dedicate to this, to making sure we have men from the business involved too.
But we’re delighted to get this conversation going, and to have a committed team of colleagues leading it.
“We hope that more male and female colleagues will participate, contribute to ideas and help us bring those ideas to life,” says our Director of Operations, Yasemin.
Senior Operations Manager, Magda, says: “I hope this will allow us to ask ourselves some hard questions around what the problems are, and what it really takes to make change.”
Nanna, our Director of Banking Operations, adds: “This is a chance for all the cool, talented women we have here at GoCardless to explore their ambitions and to see (and shape!) the career opportunities that are available.”
If you want to find out more about Diversity and Inclusion at GoCardless, check out our D&I page, or get in touch!
Posted on February 26, 2019
At GoCardless, Postgres is our database of choice. Not only does it power our API, Postgres is also used extensively in a number of internal services. We love it so much we just won't shut up about it, even when things go wrong.
One of the reasons we love it so much is how easily you can dig into Postgres' implementation when there are issues. The docs are great and the code exceptionally readable. These resources have been invaluable while scaling our primary database to the ~2TB we now run; no doubt they will continue to provide value as our organisation grows.
We at GoCardless believe that failure can be a great learning opportunity, and nothing proves that more than the amount we've learned from Postgres issues. This post shares a specific issue we encountered that helped us level-up our understanding of the Postgres query planner. We'll detail our investigation by covering:
We conclude with the actions we took to prevent this happening again.
One day we started seeing unusual query performance from several sources: both async workers and API requests seemed to take longer to respond than was normal. Examining the problematic database queries, the common factor was that each query touched our payment transitions table (almost half a billion rows) and associated relations.
Looking further, we saw a concerning number of very long-running queries that normally execute in under 50ms. Now suspicious that we'd hit a poor query plan, it was time to find an exemplar query and dig in:
SELECT *
FROM payment_transitions
JOIN payments
ON payments.id = payment_transitions.payment_id
WHERE payment_transitions.payout_id = 'PO00123456789Z'
ORDER BY payment_transitions.id ASC
LIMIT 1;
Debugging a query plan almost always follows the same pattern: take time to understand the query, identify why the plan you received is bad, then hypothesise an ideal plan that would be fast. That new plan often requires an index that is yet to be created. On the other hand, perhaps a fast plan doesn't exist for this query. Whatever the outcome, it's key to every step that you understand the shape of the data you're querying.
Our query references two tables, payments and payment_transitions. In this system every payment has states it can transition through [1], and each of those states is represented as a row in the payment_transitions table. We'll be filtering on a foreign key of payment_transitions called payout_id, which marks that transition as having been included in a payout.

Approximately 20% of our payment transitions will be marked with a payout_id, and there are approximately 20 payment transitions per payout. We can reasonably expect the number of payout_id values to grow linearly with the size of our payment_transitions table.

Using approximate figures, if we have 350m payment transitions, we can expect 70m to be marked with a payout_id, and there would be almost 3.5m distinct payout_id values in the payment_transitions table. Finally, we have an index on the payout_id column that looks like this:
CREATE INDEX index_payment_transitions_on_payout_id
ON payment_transitions
USING btree (payout_id);
This should provide enough context for us to properly evaluate each potential plan for this query.
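As a sanity check, the distribution described above can be reproduced with quick arithmetic. This is a Python sketch using the approximate figures from the text, not production data:

```python
# Back-of-envelope figures for the payment_transitions table.
total_transitions = 350_000_000
payout_fraction = 0.20          # ~20% of transitions carry a payout_id
transitions_per_payout = 20     # ~20 transitions per payout

with_payout = int(total_transitions * payout_fraction)
distinct_payouts = with_payout // transitions_per_payout

print(with_payout)       # transitions marked with a payout_id: 70m
print(distinct_payouts)  # distinct payout_id values: 3.5m
```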
Using the query we'd pinpointed as problematic, we ran an EXPLAIN in a Postgres prompt to display the selected query plan.
EXPLAIN
SELECT *
FROM payment_transitions
JOIN payments
ON payments.id = payment_transitions.payment_id
WHERE payment_transitions.payout_id = 'PO00123456789Z'
ORDER BY payment_transitions.id ASC
LIMIT 1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
Limit (cost=1.14..21700.47 rows=1 width=262)
-> Nested Loop (cost=1.14..58045700.29 rows=2675 width=262)
-> Index Scan using payment_transitions_pkey on payment_transitions (cost=0.57..58022604.77 rows=2688 width=262)
Filter: (payout_id = 'PO00123456789Z'::text)
-> Index Scan using payments_pkey on payments (cost=0.57..8.58 rows=1 width=14)
Index Cond: ((id)::text = (payment_transitions.payment_id)::text)
(6 rows)
This query includes a join operation between our payments and payment_transitions tables. Formally, a relational join is an operation on two sets, R and S, which produces a result consisting of all combinations of tuples (a tuple being a single row result) in R and S that are equal under a particular matching condition.

When joining two tables, Postgres employs one of three strategies: merge, hash or nested loop. The strategy Postgres has chosen for our query is the nested loop, the most naive of the join strategies. A nested loop join iterates over every tuple in payment_transitions and, for each tuple, scans the payments table for tuples that match the join condition, which in this case is payments.id = payment_transitions.payment_id. Our result will be all the tuples that satisfied our condition.
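To make the strategy concrete, here's a minimal Python sketch of a nested loop join. It's illustrative only: the row shapes are hypothetical, and Postgres' actual implementation is in C.

```python
# A minimal sketch of a nested loop join: for every tuple in the outer
# relation, scan the inner relation for tuples satisfying the join condition.
def nested_loop_join(outer, inner, match):
    """Return all (outer, inner) tuple pairs equal under the match condition.

    This is the naive O(len(outer) * len(inner)) strategy described above."""
    result = []
    for o in outer:
        for i in inner:
            if match(o, i):
                result.append((o, i))
    return result

# Hypothetical rows: transitions as (id, payment_id, payout_id), payments as (id,).
transitions = [(1, "PM1", "PO1"), (2, "PM2", "PO1")]
payments = [("PM1",), ("PM2",)]
joined = nested_loop_join(transitions, payments, lambda t, p: t[1] == p[0])
print(joined)  # each transition paired with its matching payment
```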
Looking at our plan, we're using the payment_transitions_pkey index to select each transition tuple, and for each transition that has a matching payout_id, we use an index lookup into the payments table to perform the join. The advantage of this query plan is that the first matching row we find using the payment_transitions_pkey index is guaranteed to match our query's ordering [2] constraint (ORDER BY payment_transitions.id), so we can halt execution at that point, as we only require a single tuple (LIMIT 1).
Sadly, this query plan is not going to work well for us. Recalling the underlying distribution of this data, for every payout we expect approximately 20 matching transitions. If we assume that these matches are evenly distributed [3] throughout the payment_transitions_pkey index (a fair assumption for query planning purposes), then we expect to scan 1/20th of the table before we find our first match.
At 350m rows, this amounts to 17.5m rows we need to scan, or about 20GB [4] of data. This plan will never match the performance we expect from this query, so something has gone deeply wrong.
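The scan estimate is straightforward to reproduce. A Python sketch using the approximate figures from the text:

```python
# With ~20 matches spread evenly through the primary key index, we expect to
# scan 1/20th of the table before hitting the first one.
total_rows = 350_000_000
matches_per_payout = 20

rows_scanned = total_rows // matches_per_payout
print(rows_scanned)  # 17.5m rows before the first expected match (~20GB)
```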
The calculations we just performed are very similar to how Postgres evaluates query plans. In fact, the statistics we've been quoting are tracked and updated regularly by Postgres through the auto-analyze process, and are known as the statistic values n_distinct and null_frac.

Having a measure for each column of the number of distinct values (n_distinct) and the fraction of rows for which the column is null (null_frac) enables Postgres to compute the expected number of rows returned for a given query as approximately row_count * (1 - null_frac) / n_distinct [5].
Looking at the explained output of our plan:
-> Index Scan using payment_transitions_pkey on payment_transitions (cost=0.57..58022604.77 rows=2688 width=262)
Filter: (payout_id = 'PO00123456789Z'::text)
We see that Postgres expected 2688 payment transitions to match our filter condition on payout_id. Assuming this is a typical payout (it doesn't appear in Postgres' most common values [5]), we've wildly overestimated the number of transitions attached to the average payout, which should be about 20. When we look at our statistics for this column, we start to see some concerning numbers:
postgres=# select attname, n_distinct, null_frac from pg_stats where tablename='payment_transitions' and attname='payout_id';
attname | n_distinct | null_frac
-----------+------------+-----------
payout_id | 25650 | 0.81
Plug these values into our formula from above and they imply that the number of payment_transitions matching our payout_id is 350m * (1 - 0.81) / 25650 = 2592. We know from our manual calculations that the average payout is associated with 20 payment_transitions, so Postgres' estimate of the number of distinct payouts is incorrect by two orders of magnitude (25k vs 3.5m). Such an error will prevent Postgres from making sane decisions when comparing plans.
Our ideal plan would be to fetch all matching payment transitions for our payout, then (knowing this will be a small number) perform an in-memory sort on the results, returning the transition with the minimum ID. The initial fetch of matching transitions would be fast thanks to our index_payment_transitions_on_payout_id index.
The plan (with correct row estimations) would look something like this:
QUERY PLAN
-------------------------------------------------------------------------------------------------------------
Limit (rows=1)
-> Sort (rows=20) Sort Key: payment_transitions.id
-> Nested Loop (rows=20)
-> Index Scan using index_payment_transitions_on_payout_id on payment_transitions (rows=20)
Index Cond: (payout_id = 'PO00123456789Z'::text)
-> Index Scan using payments_pkey on payments (rows=1)
Index Cond: ((id)::text = (payment_transitions.payment_id)::text)
Materializing all matching transitions for most payouts will be quick and the subsequent sort cheap, as on average there will be so few of them. This is what we want the query plan to produce but our statistics meant Postgres vastly overestimated the cost of our sort, opting for the much more expensive primary key scan.
Postgres' planner has chosen a query plan that could potentially require far more data than the alternative, given it believes the chance of an early exit (finding a payment_transition with a matching payout_id) will be high. We can see in the planner code exactly why this happened and how the decision was made:
/* src/backend/optimizer/util/pathnode.c:3414
* 'count_est' is the estimated value of the LIMIT expression
*/
LimitPath *
create_limit_path(
PlannerInfo *root, RelOptInfo *rel, Path *subpath,
Node *limitOffset, Node *limitCount,
int64 offset_est, int64 count_est)
{
...
if (count_est != 0)
{
if (subpath->rows > 0)
// subpath->rows is an estimated row count of our
// source query, while count_est will be the LIMIT
// value when known. This means we're discounting our
// cost by the fraction of the source query we expect
// to use.
pathnode->path.total_cost = pathnode->path.startup_cost +
(subpath->total_cost - subpath->startup_cost)
* count_est / subpath->rows;
...
}
}
This code is taken from src/backend/optimizer/util/pathnode.c
which contains utilities to prune and optimise possible query plans (paths).
This extract is where we calculate the cost of a plan that has a limit, as this
will mean some optimisations might be possible.
The most impactful optimisation for limit queries is to exit early. If your query has a very small limit, then Postgres should stop execution as soon as the limit is satisfied rather than continue scanning all your data. When a limit is combined with an ordering, exiting early becomes possible for candidate plans provided they find rows in the order that matches the sort clause.
Recall that our query asks for a single payment_transitions row (LIMIT 1) sorted by id. If Postgres finds our payment_transitions by searching the payment_transitions_pkey index then we'll find our results in order of their id, meaning the first tuple we return that matches our query's WHERE clause will satisfy the query, and we can exit early.
The possibility of an early exit means we can discount the total cost of any already-sorted plan by count_est / subpath->rows, as, assuming our matching tuples are found evenly distributed across the search space, this is the fraction of the plan we'll need to execute before we've produced all the tuples we require.
In our case, count_est is 1 and subpath->rows is high (2592) due to our underestimation of n_distinct for payout_id. A small numerator and a large denominator mean the discount is huge, which is why the nested loop through the payment_transitions_pkey index was chosen as the best plan.
As soon as we realised our statistics were causing such a poor query plan we re-ran an analyze to cause Postgres to resample. We had to do this a few times before our plans got better, which hints at the more concerning root cause of this problem.
When Postgres runs an analyze, it takes a sample of the table to use for generating statistics. There are several subtleties around how Postgres samples the table that can impact the accuracy of the table's statistics, and at present it's not possible to solve all of them.
The Postgres GUC (Grand Unified Configuration) variable default_statistics_target defines the default sample size Postgres uses for computing statistics, as well as the number of most common values to track for each column. The default value is 100, which means "take a sample of 100 * 300 (a magic number) pages when running an analyze", then sample randomly from amongst the rows included in these pages.
But how large a sample is large enough? The n-distinct estimator used by Postgres comes from IBM Research Report RJ 10025 (Haas and Stokes), in which the authors discuss the bias and error to be expected from the estimator given various sample sizes and underlying data characteristics.

In their analysis of the estimator, they note it has been proven by Bunge and Fitzpatrick (Estimating the Number of Species: A Review) that unbiased estimators do not exist when the sample size is smaller than the count of the most frequently occurring value in the population. The bias in these estimators is significant (anywhere up to 80%), and small sample sizes cause the bias to increase.
The estimator's bias is always negative, meaning we estimate fewer distinct values than are actually present; this could explain the underestimation leading to our query plan malfunction. There can be anywhere up to 100k payment_transitions sharing the same payout_id value, so at a minimum we should sample that many transition rows to guarantee we'll find more than one distinct payout_id value. As ~80% of payout_ids are NULL, we require 100k / (1 - 0.8) = 500k rows, or 1666 as a statistics target (recalling that Postgres multiplies your statistics target by 300 to find the desired number of samples).
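That arithmetic can be written out directly as a short Python sketch:

```python
# To be confident of seeing more than one distinct payout_id, sample past
# the largest run of duplicates, inflated by the NULL fraction, then divide
# by Postgres' 300-row multiplier to get a statistics target.
max_rows_per_payout = 100_000
null_frac = 0.80

rows_needed = max_rows_per_payout / (1 - null_frac)
statistics_target = int(rows_needed / 300)
print(int(rows_needed), statistics_target)  # 500000 rows -> target of 1666
```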
We bumped the statistics target for this column like so:
ALTER TABLE payment_transitions
ALTER COLUMN payout_id SET STATISTICS 1666;
We then repeatedly ran analyzes, checking the n_distinct value after each run. While the values were slightly better than what we'd seen before, we continued to see large variation and massive underestimation. It wasn't until we bumped our target to 5000 that the value became stable.
We expected that a moderate bump of our sample size (say, to 500) would produce markedly better results than the default 100, but this turned out not to be the case. Careful thought about how Postgres generates its random sample led to the conclusion that we were unduly biasing our estimator by taking a fair, random sample from a statistically biased selection of pages.
Postgres generates its samples in a two-stage process [8]: if we want to collect a sample of 100k rows, we first gather 100k pages and then collect our sample from those pages. This means not every table tuple has the same probability of appearing in our sample, as we're confined to the pages selected in the first pass. Ideally this wouldn't be a problem, assuming column values are distributed independently amongst pages, but in our case (and, we suspect, many others) this is not true.
Our system creates all payment_transitions for the same payout_id in one sweep. The payment_transitions table is mostly append-only, so Postgres is prone to place all those new transitions physically adjacent to one another, sharing the same pages. If we take our random sample from a restricted set of pages, we vastly increase the probability of sampling a value multiple times in comparison to selecting from the entire table.
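A toy simulation illustrates the effect. This Python sketch uses a hypothetical page layout (not Postgres' actual sampler) to compare page-restricted sampling against a fair row sample over clustered data:

```python
# A toy model of two-stage sampling over clustered data. Each of 10,000
# "payouts" writes its 20 transitions contiguously, so pages contain long
# runs of the same payout_id.
import random

random.seed(42)
ROWS_PER_PAGE = 100
rows = [payout for payout in range(10_000) for _ in range(20)]  # 200k rows
pages = [rows[i:i + ROWS_PER_PAGE] for i in range(0, len(rows), ROWS_PER_PAGE)]

sample_size = 5_000

# System-style: choose pages first, then take every row on those pages.
chosen_pages = random.sample(pages, sample_size // ROWS_PER_PAGE)
system_sample = [row for page in chosen_pages for row in page]

# Bernoulli-style: every row is equally likely to be chosen.
fair_sample = random.sample(rows, sample_size)

# The page-restricted sample sees far fewer distinct payouts.
print(len(set(system_sample)), len(set(fair_sample)))
```

The page-based sample can only ever see the handful of payouts stored on the chosen pages, mirroring the system vs bernoulli gap we measured above.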
We can confirm this bias by using Postgres table samples to compute our statistics with a system strategy (approximates our analyze process) vs statistically fair sampling with bernoulli. A statistics target of 500 means we'd sample 150k (300 * 500) rows, which is 0.043% of our table. Using this sample size and comparing the two sampling methods we can see a stark difference:
postgres=# select count(*) as sampled,
count(distinct(payout_id)) as unique
from payment_transitions tablesample system(0.043);
sampled | unique
---------+--------
153667 | 11029
postgres=# select count(*) as sampled,
count(distinct(payout_id)) as unique
from payment_transitions tablesample bernoulli(0.043);
sampled | unique
---------+--------
153667 | 25351
Everything we've explained so far might have you asking why you would ever not use the maximum sample size. If our sample is the whole table then we can't have statistical bias, right?
Higher statistics targets increase the time required to perform an analyze on a table. The Postgres auto-vacuum daemon is constantly triggering ANALYZEs in response to database activity, and the ACCESS SHARE locks they take on their table can block incoming ALTER TABLE commands. This can be a problem if you frequently want to change table structure, or don't take care to set a lock timeout to avoid blocking other incoming queries [9].
One non-obvious consequence is how this affects upgrading your database. Postgres major upgrades reset all statistic values. If your database depends on good statistics to perform (hint: it probably does!) then a full ANALYZE will be required before you can accept production traffic. The longer your ANALYZE takes, the longer you'll be down.
For this and other reasons you may want to hold off from dialling the sample size up to the maximum. With the exception of the n-distinct estimator we've covered in this post, our experience has shown Postgres makes very good decisions with normal sample sizes. Data shapes more varied than our payment_transitions table may be better suited to restructuring [10] than to hacking the statistics engine.
Postgres has a fantastic query planner that can help scale a database far beyond the size of the average organisation. It also provides great tools and documentation that can help you deal with most performance issues that arise with growth, and the flexibility via configuration values to handle most use cases.
That said, the heuristics powering the query planner can cause Postgres to make decisions that flip performance on its head. In this production issue we saw a normally fast query degrade in a spectacular fashion, and it was only after peeling back a few layers that we began to understand why it happened.
We've covered some ways you can configure Postgres to adapt beyond the size of the average database, and explained some trade-offs involved. We choose to increase our statistics target to provide better information to the query planner, but we're also pushing for more careful thought around data access patterns to reduce the complexity of each query, making it less likely for the planner to make a mistake.
Hopefully this is a useful case-study for those who want to learn more about the query planner. If you have feedback or questions about this article we'd love to hear from you @GoCardlessEng.
[1] We use Statesman to model state machines in Postgres, where each transition is a new row. ↩
[2] The payment_transitions_pkey index contains references to payment_transitions tuples in order of payment_transitions.id. This is why the first result from scanning our transitions using this index is guaranteed to have the minimum id value. ↩
[3] In our case, the assumption that payout_id values are evenly distributed with respect to payment_transitions.id is going to be terrible for us. Our real-world example happens to query recently created payout_id values, and given that payment_transitions.id is a monotonically increasing sequence, we can expect our matching transitions to be right at the end of our scan. This is an example of how reasonable assumptions in theory can lead to pathological data access patterns in practice. ↩

[4] Scanning such a large amount of data is not only going to make the current query slow but will also have a large performance impact on the rest of your database. Scans like these are likely to read old data that is not currently in the page cache, causing eviction of pages that are needed for other ongoing queries. It's worth bearing this in mind when your database has strict performance requirements and depends on hot data being cached to meet them. ↩
[5] In practice, Postgres also adjusts for the known most common values and their histogram bounds, allowing the computation to take statistical outliers into account. In this example, we can safely ignore these histogram bounds because the most common values cover only a small percentage of the table. ↩
[8] See src/backend/commands/analyze.c for an explanation of Postgres' two-stage sample strategy. The comment also mentions some flaws of this approach. ↩
[9] See our post on lock timeouts to avoid causing disruption when taking locks in Postgres. ↩
[10] Table partitioning, either declarative or by inheritance, can help manage large, heterogeneous datasets. When all the rows look similar there will be fewer candidate plans and less chance of a disastrous mis-step. ↩
Posted on February 21, 2019
We often get questions from our customers about payment timings.
In this week’s FAQ Friday, Customer Success Manager, Ellie, explains how Direct Debit payment timings work with GoCardless for both one-off payments and subscriptions.
We want to hear from you – get in touch and we’ll answer your questions in an upcoming video.