Appian CEO on AI players: "The last thing they want is having to care about other people's rights"

Matt Calkins, co-founder and CEO of Nasdaq-listed Appian, tells MT about his fears for Big Tech’s dominance of AI, his indifference to the share price and why board games are his secret weapon.

by Kate Magee
Last Updated: 08 Jan 2024

Should we change the definition of what it means to be a human being? What are the disadvantages of freezing yourself to live in the future? Why are board games a CEO’s secret weapon?

It wasn’t quite the conversation MT was expecting when we showed up on a rainy day in London to meet Matt Calkins, the billionaire (according to Forbes) founder and CEO of Appian. 

Appian helps businesses create apps and software using very little code. Calkins founded the business with three friends in 1999, when he was 26 years old. Unusually for a tech company, all four are still leading the business 24 years later. 

Early clients included government agencies. In 2001 it developed Army Knowledge Online, which Wired magazine called ‘the world’s largest intranet’. It has worked with much of the US Department of Defense, including the US Navy, the US Marine Corps and the US Air Force. In 2005, it expanded into the corporate world; current clients include Aviva, Deloitte, GSK, John Lewis, KPMG, Munich Re, NatWest, Santander and Serco. 

The business went public on the Nasdaq in 2017. Like many other tech firms, it saw a dizzying rise and fall over the course of the pandemic. (The share price was around $45 at the start of 2020, peaked at $226.68 in 2021 and is holding around $34 so far this year.) Calkins claims not to pay attention to the short-term fluctuations in the share price. 

Appian is also involved in one of the largest corporate espionage cases in US history. In May 2022, a judge ordered rival Pegasystems to pay Appian $2.04 billion in damages after Pega was found liable for stealing trade secrets. The legal battle is ongoing: Pega is appealing the verdict and Appian has filed a rebuttal. 

As a tech CEO, Calkins has some strong views about the development of AI - namely that the Big Tech firms are trying to distract and scare people by talking about the technology’s existential threat, rather than more practical debates around copyright and privacy issues. 

He’s also a board game aficionado: he competes regularly in the World Boardgaming Championships, runs game nights with his friends and has written four of his own games. 

Below are the edited highlights of our conversation. 

MT: You think the wrong people have been called upon to legislate AI. Why?

MC: All the areas mentioned in the White House and Bletchley Park statements on AI are those of particular concern to Big Tech. What we're not talking about are the issues of concern to people who create content, or anyone who has data privacy issues. 

Firstly, these statements are largely around existential threats, which allow Big Tech to say that we need some sort of anti-proliferation regime. This helps them cement a monopoly they have not earned.

Secondly, it allows them to train almost at will on privately-owned data provided they anonymize it. I want more privacy than that. I want the things that I create to have more defence than that. 

AI isn't a creator, it's a reshuffler of data. If it's my data that's reshuffled, I should own a piece of that game.

What should change?

We need to assert more data rights. That's the number one thing we should be nailing down at the beginning of the AI era, yet it's the last thing anyone seems to be talking about.

The industry wants to talk about Terminator scenarios, not other issues, because they know that they're violating copyright left and right. The last thing they want is a speedbump, like having to care about other people's rights. So they would rather scare us with existential threat. 

You argue that companies need to set up their own private AI systems rather than using publicly-available large language models. Why?

No company wants to share their data over the internet, upload their databases and train an AI model that they don't own. And yet when we talk about how we're going to use AI and ChatGPT, that's the way to do it - we can't get any value otherwise. So we have a conundrum. 

How would a private AI system work?

One way would be to take an open source AI, bring it inside the firewall, train it, cultivate it, and use it internally. That's possible but difficult. Maintaining a customised algorithm is a really expensive thing. 

Other organisations will need to do something that preserves their privacy, but still leverages the advancements of the industry so that they are not left on their own to figure it out. 

The way to do that is to accompany every question with the data that answers the question. Then you don't have to train the AI algorithm in advance - it does a pretty good job of giving you the answer, provided you focus it on the right context. That's called Retrieval Augmented Generation and it allows for a more private version of AI. 
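The pattern Calkins describes can be sketched in a few lines: rather than training a model on private data, each question is paired at query time with the documents that answer it. The toy word-overlap scorer, document list and prompt format below are illustrative assumptions (a production system would use embedding similarity against a vector index), not Appian's implementation.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG): retrieve the
# most relevant private documents for a question, then send question +
# context to a base model instead of training the model on the data.

def score(question: str, document: str) -> int:
    """Crude relevance score: number of shared words.
    (Stand-in for embedding similarity in a real system.)"""
    return len(set(question.lower().split()) & set(document.lower().split()))

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the question."""
    return sorted(documents, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Accompany the question with retrieved private data, so the model
    answers from supplied context rather than from prior training."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Usage: the private data stays behind the firewall until query time,
# and the base model needs no custom training or fine-tuning.
docs = [
    "Invoice 1042 was paid by Acme Corp on 3 March.",
    "The staff canteen is closed on Fridays.",
    "Acme Corp's credit limit is 50,000 dollars.",
]
prompt = build_prompt("What is Acme Corp's credit limit?", docs)
```

The prompt would then be passed to whichever model the organisation uses; only the retrieved snippets, not the whole database, accompany each question.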

How will AI change the business landscape?

So far, AI is seen as revolutionary because of the way it generates content. What we're beginning to realise is that AI is just as revolutionary for the way it accesses data. We haven't had the conversation we need to about AI as a data access and synthesis tool. 

AI is going to be a major boost to the value of information. Inside the typical corporation most data is wasted, because it isn't brought to bear the moment a decision needs to be made. There may be 1,000 systems, but when a customer is on the phone, generally a company is only using the data in one system.

Centralising data has been too unrealistic - it doesn’t work. But AI is the answer - it can synthesise all that information at speed.

What will happen in AI this year?

The year of talking is over. There’s been a lot of hyperbole, a lot of giant predictions - it's all nonsense. No way is an AI going to be as smart as a human this decade, or write Shakespeare in seven years. AI is going to act like a teenager because the internet is written by teenagers.

AI is going to plateau because we've gotten all we can out of feeding it. Now the hard work begins and we need to figure out how to get practical value. It’s not how to beat the Turing test, it’s how to do a specific thing with higher productivity than we've done before.

We're not making AI our new boss or counsellor for life. We're just going to make it easier to parse customer communications and gain information from incoming emails. We're going to make it easier to appraise damage to the shell of a car after an accident. We’re going to evaluate the lifetime value of the customer while they're on the phone and get advice as to whether we should offer them a discount. 

We're going to find that small teams with innovative ideas applied specifically to the problem at hand can lead in AI. The measurement of the year will be productivity.

Who will be the big winners of AI?

AI is not a top-down technology. It's not a 'winner takes all' and it's not whoever spends most wins. Instead AI is a game to be won in pieces and by innovators.

Consider ChatGPT. It wasn't Google that won the race. It was a fairly small AI-centric non-profit that first broke out. Small, focused, innovative firms will be the AI winners. 

Has AI already gone too far for us to claw back control? 

We can't stop it. As a species, we don't know how to say no to new technology. That may hurt us at some point, but I don't think that AI is an apocalyptic threat in the near term. I'm far more worried about what people will do to other people using AI, that's a real danger. 

We're also dependent on technology to a degree that we don't even acknowledge or realise right now. Modern human beings are quasi-digital. We're becoming bionic - we need the technology. 

The more we develop into a bionic tech-needful entity, the more we face a choice about whether to declare that part of the human definition. Do we want to say the definition of a person now includes a private database? Or the definition of a person now includes an inviolable AI instance, which belongs to them, and can't be questioned because it has Miranda rights [it can’t be forced to divulge an answer]. 

I think we face a choice: either humans are going to become smaller and more dependent on something controlled centrally, which I think is frightening, or we need to augment ourselves, declaring that humans include some of the necessary technology we've become dependent on. 

At some point within the next decade, we have to decide whether we're going to expand or contract the definition of human beings. 

That’s interesting. There’s been much debate about whether humans will eventually become digital beings. What do you make of the singularity theory - that this century there will come a point when machine and human intelligence will merge? That humans will effectively be ‘uploaded’ to a supercomputer?

I'm sceptical of that singularity. If you were uploaded into a database you wouldn't be you. The hypothesis that you can load your brain into a database and live forever is based on a misunderstanding - you wouldn't be able to feel or see. If you freeze yourself and wake up in 1,000 years, you wouldn't be you. A lot of these theories are based on a fanciful interpretation of the permanence of human consciousness.

AI also presents a huge challenge to personal identity. How can people retain control over their image and prove they are real when deep fakes are so prevalent?

I think the new generation is just going to learn that you can't trust what you see. Past generations knew this. We’re going to regain our natural scepticism.

We've unfortunately degraded our trust in authorities to our general disadvantage - both sides have contributed to that. AI is just going to make it worse. We're going to go from eroding trust to collapsing it, and people will be more balkanised than they used to be. 

It will intensify our culture's already notable valuation of authenticity. We really care whether something is real. Even today, by far the most popular thing on television is a live event. 

Our compulsion for authenticity is going to be intensified by our mistrust of anything that's being represented after the fact, because it can be manipulated. 

Speaking of fraud, your company is involved in a huge corporate espionage case in the US. What advice do you have for other CEOs to prevent becoming victims? 

We found out because of a whistleblower. A former employee of Pegasystems told us that Pega had conducted espionage. What they were doing is pretty hard to police. So I'm not sure I've got great advice for how to stop that level of espionage, other than if you see it, make the penalty high enough so that even if the perpetrator thinks they'll get caught only 10% of the time, they still won’t be willing to do it.

You went public in 2017. A year later, you told The Washington Post that you didn’t regularly check the stock price because “the short term fluctuations are not interesting. They can only distract.” Is that still your approach?

I'm not interested in the stock price today. In general, if I find out about it, it’s because someone else told me. I don't care about fluctuations, I care about the very long term. There's not a whole lot of importance in whatever the trading price is at any given moment. It’s certainly not what writes my strategy. 

Like many tech companies, your share price enjoyed a dizzying climb during the pandemic and then a notable drop. Did it change the way you ran the company?

I tried to change nothing about how I run the company. We did not celebrate when the stock price went up. And we did not anti-celebrate when it went down. We tried not to watch it and instead just keep executing. 

I was once at a company that did just the opposite. Every day there'd be a ticker and the CEO would talk about how much money they made or lost. It was really intolerable and it took people’s eyes off the ball. 

When do you expect to become profitable?

We’re going to cross the profitability line next year. We’ve said this to investors and we’re well within our bounds for this year's projection. 

Appian is the poster child for the Dulles Tech Corridor (an area in Northern Virginia, outside Washington DC). What advantages and disadvantages have you experienced from resisting the lure of the more established tech hub of Silicon Valley?

It is possible to build a tech business outside of Silicon Valley, but you have to build the right business. 

You probably couldn't win the B2C market, because that is so much based on money up front. You would need a very mature funding community and Washington doesn't have that. 

There's also the matter of gathering talent quickly when you strike the mother lode. We don't have a talent pool you can draw on at that order of magnitude. So if you are a business that needs to spend, get big, and gather people quickly, you probably can't do that outside of Silicon Valley. 

But our industry encourages long-term building. An enterprise software company builds gradually, it builds based on customer experience and value delivered. It's not a marketing vehicle. 

We didn't run it based on spending first and making money later. We paid our own way. It was mostly bootstrapped. That kind of business can be built anywhere where you've got a reasonably good executive and university community. Washington absolutely has that. It’s an intelligent town with some funding available. 

Why did you decide to bootstrap the business for most of its existence?

I started Appian when I was 26 years old. I didn't think my lack of money was the thing that was holding me back, it was more lack of experience. I had a learning ladder to climb and I didn't want to have a tonne of money at the beginning as I probably wouldn't spend it perfectly. It was probably better to grow slowly and build my own abilities at the same time as we earned our place in the industry.

The somewhat slower path was more appropriate for us and for me. Also, I didn't want to dilute the firm for fear of losing its ability to self-author. I didn't want to follow trends, or rise and shrink with the market, or be dependent on money or the people who represent money. It's very important to me that the organisation has autonomy and it still does today.  

How do you think you have personally changed since you started the company with your three friends?

My three co-founders are still there. I think it has something to do with loyalty, which is really important to me, and a supportive and encouraging company culture. We don’t get into fights over money or share or authority. We are still friends.

But how have I changed? Oh, very much. Every year I look back and think ‘how did I ever get away with being like that?’ So I've been through a whole lot of those reinventions. I still feel like I am better now than I was last year. 

What's your biggest leadership lesson?

Be sure you're creating value. Really check on that. Follow your actions all the way to impact no matter where you are in the organisation. Challenge yourself. Use that as the foundation for everything that you grow and do. 

If you want to start a new programme, do it in the simplest, most direct way, make it work, establish great value and then replicate. 

You are a board games aficionado. You run your own game nights, you compete regularly at the World Boardgaming Championships and you’ve created four of your own board games. Why do you think board games make you a better CEO?

I’ve liked board games since I was just old enough to count. I like them because they're numeric; I probably learned to count because I wanted to play games. I love them because they're competitive, they can teach you things like geography if there's a map involved. I love it as a social experience. I love it for complexity. I like rules. I like logical intricacy. So all of that makes them terribly interesting to me. 

My favourite? I think it's more fair if I don't cite my own. I like relatively complex, business-oriented games, or maybe geopolitical ones. My favourites include Acquire, Power Grid, Automobile and Ingenious. That's a good field.