Podcast 140.Kristof Van Tomme on the role of API and developer portals in Generative AI systems

We talk to Kristof Van Tomme, CEO of Pronovix, about the role of API and developer portals in Generative AI systems. We discuss how organisations can prepare for AI.

We talk about what documentation infrastructure organisations should have in place to be able to use AI safely from an information perspective. We also discuss whether developer portals will become even more important.

Transcript

Speaker 1
This is the Cherryleaf podcast. Now the normal way that we have done our introductions in the past for our episodes has been asked the person we’re interviewing to simply introduce themselves, say who they are and what they do. So Kristof, I will keep the tradition going and ask you who you are and. What you do?
Speaker 2
So I’m Kristof Van Tomme. I’m the CEO of Pronovix, which is a consultancy that specialises in developer portals. It’s a bit complicated because we have a product and we do services. That’s you know if you want to find out more about that go and look at our website. I’m a bio engineer by education, but I’ve spent all of my career in software so there’s a lot of that ecosystem and platform thinking and that kind of stuff that’s sneaking in through the back door where I’m trying to apply some of the things that I remember and that I see in my day-to-day in my garden and around me in nature that I try to apply to, to software systems and the social technical systems that. We helped to create with complex complicated systems.
Or complex, I would say they’re definitely complex.
Speaker 1
Cherryleaf and Pronovix have worked together in the past, and you’ve wrote an article on the Pronovix’s blog and one of your colleagues suggested it might make a good conversation topic for the podcast about AI and APIs, which are probably the two hottest topics in documentation, so it might be good to start by to summarise what it was that that article was about.
Speaker 2
This is already a couple of months ago that we tried to organise this. Earlier we had a bit of a hiccup and then holidays. But already a couple of months ago. This was when the hype train was really starting to roll out of the hype station. It’s at unprecedented speeds. Let’s say you’ve never seen a trend take off with that kind of fervour in the software world, and I think what I wanted to do was to challenge a little bit the immediate instincts that a lot of people had was OK, so how can we use large language models to serve our documentation? So we’re going to replace our documentation site with large language model. There’s several things that that came into that article that I wrote based on a lot of thinking I’ve done about software systems and social technical systems and what’s the difference between an organic complex adaptive system and software system and what are they different? How are they different and what are they good at? And now, what do we need to do with this? And I wanted to challenge people. You don’t necessarily need to go and install your own large language model. There’s lower hanging fruits and there’s other things that this change will trigger in the ecosystem, in our wider society. That’s probably you can take a stance on and that you can get ahead of the game by doing some other things, so that was the intake for the article and I’ve hinted at some things, but I haven’t gone into the details yet
Speaker 1
As you say we’ve been talking about having this conversation for a for a while, since you’ve written an article, have your ideas changed, I think.
Speaker 2
Fundamentally they didn’t. Well, they have deepened and like last week at API days in London, I was actually talking to a couple of people about how to declare your interfaces. We started talking about maybe we should have a new standard for this and this new blog post that XXX, right, that is actually continuation on this. It’s also about AI readiness, but it’s like a little bit more tangible already. It is still the same idea about making a clear separation between this generative and authorative. That is still there. And yeah, just like getting used to computer systems now operating in different modes than what we’re used to.
Speaker 1
So we’ve also been on the same path in looking at this at APIs and thinking Oh my at AI generative AI and thinking Oh my God. This is going to change things and where do we fit in? And what we’ve done is has developed a training course, new learning course on this and you’ve mentioned something there that was in your article. And that was actually one of the questions I had, and that was about generative and authoritative information.So do you want to expand on that on how you see the difference between the two and why it’s important to distinguish between the two.
Speaker 2
Sort of. I think the best way that I’ve found to explain it is imagine you go into an office building. If it is still not all remote and it actually still has an office, you go to the front desk. And you talk to receptionists and you ask them, do you know If John is in? Or yes, I’d like to speak to somebody from support or I’d like to know a little bit more about your products and the receptionist will say, well, from what I know, I’ve seen John, I think I’ve seen John coming in, but I’m not sure who that is. But I think he’s in or yes. Let yeah, I have some vague ideas about what our product is and they’ll say something. But then they’ll say let me look it up for you and they’ll go into their system and they’ll use an authorative system that actually knows, like, yes, Sean has passed the gates. He hasn’t left the office, so he must be inside of the building. And then let me look it up. They’ll look up their phone number, they’ll give them a call and they’ll come back and say, Yes, John is in and he’s waiting for you in reception area. This is the difference between generative and authorative. The generative part is what humans are really good at. It’s like giving a first approximation kind of exploring a little bit of landscape, providing hooks that you can use to start doing research. But then the authoritative system actually goes and fetches the information and says yes, I am 100% sure this is true or this is not true. And the really interesting part is that we’re used to computer systems doing the authoritative bits. We’re not used to them doing the generative bits. This is new and this is where it gets us. completely confused and start believing that this is going to be magical or something.
But it is just like with humans. When you when you hire somebody and you to do a job, they need a lot of training to be able to do that job, and they’ll need a bunch of systems to support them, to actually be able to do that job with precision. And in a in a really qualitative way. Like if you if you just take a person from the streets and asked him to do something, it’s not going to work. And even if you train them, even people that have spent decades researching and developing like researchers, they will still rely on their computer systems to go and validate the things that they’re saying, because we just don’t remember or we don’t rely on our or we can’t rely on ourselves, on our memory, to be able to do all of these things we have grown to rely on our systems and our technical systems to support those in those functions.
So this is the challenge of hallucination where if large language models don’t know the answer they make it up.
Speaker 2
Yes. And I think the challenge of hallucination is a labelling problem because it when you go to receptionist and you ask them, what do you think they’ll, they’ll come up with an answer. But they’ll say I think that I’m not sure, actually. Let me look it up so that that is still something that we need to figure out. How do we make these generative systems give an idea about the reliability for their answering, but it goes beyond that hallucination problem. I think it’s about composite systems that are much more capable of doing things than any of the two systems on their own can do.
Speaker 1
So we had this initial thing where it everyone was thinking, Oh my God, there will be no need for like, Technical Authors, Technical writers. There’ll be no need for documentation pretty quickly. It’s become clear that you need authoritative source content to get good answers out of an API. How do you see the role of curated, well written documentation in this landscape where people are using generative AI systems to provide the right information.
Speaker 2
I can imagine content harvesting systems that are self-correcting and that are that are validating and so on. But I think that’s still quite a way ahead in the future. Also kind of like making sense of the whole or just really like selecting the good stuff from the bad stuff because all what generative AI is doing is basically generate a bunch of random stuff and then little by little getting better at saying good things purely because there’s a human that has been training that system. I wonder if in in a large language model context if you were not going to need even more humans to train those systems than in a pure, just declarative documentation model and I think the other aspect is things. Like, how do you make sure that the AI that your that your large language system has the right information that is up to date? Like you, I think you could imagine a system that is just feeling its way to reality and that’s when people start complaining or there must be something wrong. But that’s kind of a really terrible way of doing business, you know, like where you need to have people complain or give negative feedback to the system to for it to start seeing like ohh. Actually probably there’s something wrong in my model and then start adjusting weights to systems. So I think probably the experience is going to become much more advanced and much more interesting, but the amount of work to deliver that experience might be equivalent or even more than what we have today.
Speaker 1
So in the past few weeks and it really is just the past few weeks, there’s been a couple of approaches to this problem that have come out I. I don’t know if you’ve had a chance or come across them yet. One is RAG or retrieval augmented generation and the other is to interface with APIs for getting information.
Speaker 2
The second one I am familiar with.
Speaker 1
So the idea with submitted generation is that you use the large language model purely as a way of generating natural language answers, and what you have is you have a bucket of information, which is this the authoritative source.
Speaker 2
First one.
Speaker 1
So somebody asked a question, there’s a gateway. It looks for the source content that has the answer that somebody’s asked. It then prompts the source content into the prompt that goes to the large language model with the users question, and then lets the large language model find from the book, the source, where is the right answer that. And therefore the large language model doesn’t use any of its own data sets to find the answer, it only uses the information that it’s been given from the from the user documentation, and then it provides the answer. That’s one approach that that people are taking. You’ve got limitations with that in that. You’ve only got so much content you can put into, yes, a prompt, but that means it only gives an answer from an authoritative source, and then the other way is being discussions on instead of having a database doing it by APIs, which I think I said you may have come across.
Speaker 2
Yes, I think this is where the big dream for this technology is the idea of a general artificial intelligence and of an AI agent that interacts instead of you with the world, so it can go and search information, and you can if like pre-filter information based on what it knows about you to make certain predictions about the content that you probably would want to engage with and then help you to find your way to the right content. And I think one of the things I suspect is happening and would be really interesting to see it was.
Speaker 2
But I suspect that the next feature phones that’s the large mobile phone manufacturers are working on probably will be able to run olms like on the device or I don’t know. I imagine, right? If they’ve run looms on a Raspberry Pi, I think it was really terribly slow. I don’t know yet how much computation power you really need. I know that it’s horrendous and it’s a lot actually, but I would imagine that this layer, this translation layer will move into the edge. I think that there’s a lot of things that can go wrong with this when you centralised it. Imagine you give ChatGPT the right to do API calls against your bank account. That’s not a very appealing proposition, right? Imagine that an AI is starting to interact with the world through APIs. There’s all kinds of ethical problems with that. Who is sticking those actions is that ChatGPT, or is it you that took that action and now you could imagine. But if you own the device and if you own the model and it’s in your own device and so on, then it becomes very interesting because then it becomes almost an extension of you as a person. That is able to do certain things out there. In the world, instead of you, with your supervision and your control, that’s where we’re going towards. And APIs are the perfect solution for interacting with the world, because that’s what we’ve been using them for, for programmatic access. And now we have something that’s becoming more and more human like that we can use as a translation filter between us and that programmatic world. So there’s a lot of fascinating stuff ahead of us.
Speaker 1
Because I’ve seen there’s some YouTube videos of people have installing large language locally on their desktop and then putting all of their information in so they can just ask a question and it’ll retrieve answer it from the word files and all of the different things, maybe well, mainly PDFs rather than Word files from that and the processing requirements and the size of these databases. I hadn’t even considered the potential for it being on a smartphone, but the banking thing is interesting because one of the risks at the moment with large language models is what’s called prompt injection attacks, where you can do malicious coding and there is at the moment, no defence against us. So if you are giving an AI system access to behind the PIN banking, then there is a risk that somebody could inject code via a prompt in and get into a banking system and potentially raid one or more persons bank accounts.
Speaker 2
But like the real value of these systems, so if this is what we’re going for this layer between humans and machines. Then we will need to make sure that the human that owns the bots. It’s improving. What’s happening in their name.
Speaker 1
To talk and we’ve been touching on this now really of AI agents to talk a bit more about what you’re voting or your current thinking. All of that aspect.
Speaker 2
Reform APIs and developer portals, yeah.
Speaker 1
And yeah, and perhaps even AI agents.
Speaker 2
So platform APIs. I think there’s two layers of APIs, that’s are, roughly speaking, and there’s always. They’re dangerous to make, like binary, or justifications are always hard. But roughly speaking, I think on the inside of organisations or organisations are working on platform APIs that are APIs that are enabling that are enabling people inside of the organisation to do complex things are and the benefits that they bring is in that they’re a bridge between doing things at scale. And scalable infrastructure that you might have created and doing innovation and what platform APIs do is or what any piece of a platform does is it makes it possible to do innovation in a scalable way so that you can really rapidly iterate and make changes and do new developments without incurring lots and lots of technical depth. Because you’ve done it in a way that you’re reusing once your building blocks capabilities that you’ve built in your organisation that you can build into those new business applications that you’re building.
Then there’s on the outside, you have what I call interface APIs. They’re APIs that are exposing some of those capabilities that you have on the inside to an outside world either as an API product that you’re selling or and this is where actually most of these things are, because most of APIs are not about monetization. There you have APIs that help organisations to build ecosystems and to interface with their ecosystem partners. And facilitating those interactions. And you can imagine it like our cells in our body where you have preserved hormone receptors from cell to cell to cell, but depending on the cell, it will do different things. And I think that’s when you’re interfacing up, so you have this on the platform level. But you also have it another layer on the individual level between organisations.
Speaker 1
You often well sometimes see that with white labelling of products as well, where somebody is selling something and it’s delivered by somebody else and it’s all controlled via the APIs. So if you order something on eBay with delivery, you can track where your parcel is, but it’s actually the Postal Service or whoever that’s actually delivering that that aspect. And there is, there’s.
Speaker 2
Some interesting stuff because you start mixing inside and outside and you might be like, as you said, like you might be branding something as a capability that you have in house, but actually it’s a bolt-on capability. But I think. It to some extent this sometimes can be mixing and sometimes you might have an internal capability that becomes something you’re selling also on the outs. Right. But at the same time, like the way that living organisms work is that they have a boundary where they decide is this harmful or is this going to help me? And if it’s harmful, then you close the gates. If it’s going to help you and you can pull energy out of it or it’s more information that helps you to become more adaptive and survive better in your environment. Then it opens the gates and it lets it come in. So some something like that.
Speaker 1
So say you’re an organisation that has those types of APIs within your organisation. In terms of where AI fits into that, is it in enabling people to use and apply those APIs without knowing how to code IE natural language, is it that you ask a question and then the AI knows which of all the APIs it needs to pick and it’s like a clever chaining device that. makes that makes it for you. Is it both? Is it one more than the other? How do you see the AI bits assisting where you have those APIs in place?
Speaker 2
I see it as the interpretation layer that translates between a human that is just saying something and then a machine that is more authoritative about what it’s doing. But it needs certain prompts. And like in in exploring like what are the prompts that. You need to be able to make the system work. And like right now, we have the whole prompt engineering thing where humans are learning a new language. It’s basically a new programming language that’s a lot less exact than what we’re used to. It’s a lot more permissive and vague it actually. It’s not even deterministic like this AI system. Like large language models. I think they’re not well, they’re on purpose, not deterministic. You give them the same prompt and you get something different every time and. And that’s what actually makes it more human or more relatable.
Speaker 1
It’s a linguistics driven programming language. In some ways better you know the English language or the grammar structures of English or another language, the better prompts you can write and the better results you get.
Speaker 2
What’s fascinating is that we’ve gone from in software engineering, it’s all about predictability. Like you don’t want a system that’s going to do random stuff. You want to make sure that like surprises typically are bugs and so. When and now we’ve developed this new technology where surprises are features. And where the ability to delight us with new answers and unexpected behaviour is actually the main feature of this technology. Which is fascinating because that that’s a whole different types of software engineering than what we’re used to. Yeah. It’s, uh, fascinating times. That’s it like that.
Speaker 1
Let me go back to something again. You mentioned about having large scale curated APIs ready for as and when AI systems come in. To get to that point, what are the challenges that organisations face from where they might be today to have that quantity, that robustness of API’s they’re ready to then apply AI.
Speaker 2
I think right now this is about what is the interaction surface you have as an organisation. And I think right now that interaction, sure, first of all, that interaction surface is structural in nature, meaning that I’ll explain, make it more concrete how if somebody wants to sell a product to your company, how do they find out to whom they should sell it? How can they start that interaction? A lot of large enterprises have created vendor systems that will guide you through set of steps that you have to go through to be able to get approval and so on and so on. So there’s some of that. But then there’s also the initial request for going through that process. How does that work? In practice today, that’s probably the interests of individual people. That you need to somehow trigger. And probably for sales that will always be a little bit like that, although that you start seeing now, I don’t know how it’s for you, but for me a LinkedIn more and more I start seeing automatically generated interactions same in e-mail, it’s actually not going as fast as they thought it would have gone. So it still is somewhat manageable, but this barrage of inputs and attention grabbing and there’s a tsunami coming and. To be able to prepare for that tsunami. I think that we need to become deliberate about what interfaces we’re what, what ports are open in our organisation and how we allow people to interact with us. And you see this already happening like in support. Where you see some really terrible examples of that in support where you know some organisations don’t even have support. I’m sure you have had this experience where you have a problem and you start looking and you have to like phone into a phone number that’s a paying number and then they put you on hold for half an hour like basically they’re inflicting a massive amount of pain. So this is like one of the terrible examples another one is the chatbots. I had this one. I ordered a package online and this was the second time that they were not able to deliver to my address because the person who was supposed to do delivery said that the address did not exist. And so when I try to like explain, hey probably maybe there’s a delivery guy that you have here in our neighbourhoods, that’s not doing their job or something. But like this address really exists. So you know please fix it
But I ended up talking to a chatbot and we went like three times around where I’m basically starting from scratch again explaining the same thing over.
And. Yeah, this is conversational design that’s where they’re trying to solve everything with a machine and it doesn’t work because you need a human to catch the slack and to be grease in the machine to make sure that all the problems do get solved and people are not getting super frustrated. Well, what I’m getting to is these are anti examples. Hopefully we’ll do better in the future, hopefully we’ll have like, supports and this kind of interaction interfaces that are where there’s a good fall back and that actually detects when the fall back is necessary or when they’re actually, people can step in. But I can’t imagine that to be able to deal with the barrage of messages that is coming that everybody will have to weapon up. And that’s or arm up that you it’s basically it’s an arms race that’s to be able to deal with the AI worlds and the barrage of messages that we’re going to get that you’ll have to have your own AI’s because otherwise you just won’t be able to get anything done anymore because there’s just so much information flooding you all over the place
Speaker 1
Well, the big developments at the moment within AI. I think they call headless videos where the ide is that you can have an avatar present on a video. You can write. You can get AI to generate the text of what the Avatar is going to say. So you can have a sausage machine that’s generating promotional marketing videos all the time and then put them onto places like LinkedIn, Twitter, and YouTube. And you can have people in low cost countries just generating all of this compound. You’re saying that it’s going to be up to us to filter that I was hoping it would be done by LinkedIn and YouTube and so on that they will read all this stuff out, but you could well be right that it gets past them. It gets sent direct by e-mail and by instant messaging. And we have to deal with a lot of that content.
Speaker 2
I think it. Is how do you know if your message is legit or not? Even a generated message might be legitimate, but it really depends on the context and a platform like LinkedIn. They have some of your context of your personal context, but primarily filtering is being done on. On a platform context level and I think. There will be more and more content that slips through and then having something that knows you through and through. And that is allowed to know you through and through because that’s the other aspect. I would hope but we’ll see because it might be another centralised nightmare. But I would hope that this could we a new generation of the open web, where or our smart devices are becoming. You know the filter for things, but I’m not sure because there’s an equally a big challenge that’s like one of the big cloud providers is going to be sitting in as a spider in the spider web and knowing you better than your phone and doing this filtering for you, who knows? I don’t know if people will allow it. I don’t know if governments will allow it. It’s going to be an interesting question.
The lazy approach is to let software do the filtering for you, which then means that they control what you are seeing, which is the, well, it’s kind of the Faustian bargain. You know, you get it for free, but they know everything about you.
Speaker 2
In the world I have my well, most of my career I’ve spent in the Drupal community where it’s all about the open web and like, how can we make sure that people can own the instruments of content creation so that the small business does not have to depend on a Facebook page to be able to be in business. I think this could either strengthen the open web or it can weaken it further if the investments are too big, although that there’s some really interesting things now with smaller like LLMS that are still quite powerful and then if you do. What, like there’s two patterns we talked about where you don’t rely on the LLM to be sufficiently trained to be able to provide for the authoritative content. Then maybe we can deal with slightly more stupid models that are using some of those other sources to provide the more intelligent answers. Yeah, it’s going to be very fast. It’s going to. Be fascinating to see how it all works out.
Speaker 1
So have you come across any organisations that are doing this today or on the right direction that people should? Keep their eyes needs focused on.
Speaker 2
So I had a conversation last week with a developer at PayPal and he said that they just published a graph of all their APIs. To so that’s that will need to go and confirm that I can share this, but I’m pretty sure that I can share it, but he said that basically what they wanted to do was they wanted to enable AI to consume APIs. And for that reason, they had to publish their graph and that’s what he did. And he said, and there’s a couple of other companies that have done this. GitHub has one of these. I think Microsoft has one of these. So Yeah, the this publishing your interfaces I think is what the step is. There might not be a developer portal. That might be something a bit more lightweight but this this is I think a really,really fascinating area. And there’s already some initial companies that are doing this.
Speaker 1
And for companies that want to be ready for AI. What advice would you? Fired in terms of steps that they can take now to prepare. For that that wonderful day.
Speaker 2
So I think. First of all, start working on your APIs. And it’s a little bit tricky, right, because if you’re really small company then well, well, how does that work? But I think it is about becoming intentional about your interfaces. It’s like how do you allow people to interact with you? What ports are you going to leave open and what ports are going to close down? And how do you do that in a way that it’s actually enables innovation and enables success rather than filter out opportunities and closes you down from opportunities? And so if you’re really a large organisation and you already have APIs, then you really need to start working on. OK, how are you going to publish that and how are you going to make it accessible for potentially for LMS or for some next generation thing that is going to come after this. How do you communicate about those APIs and API design? And things like that.
Speaker 1
How do you address the sceptics? That might, I mean, with the within every organisation, often as the battle of inertia, the option of doing nothing. How would you address concerns people say again, that AI swipe or that the APIs have not necessarily be all and end all the benefits or drivers?
Speaker 2
So from a certain size of organisation, having platform APIs or having a platform like having an idea about what is your platform that allows you to be adaptive and reactive at scale. I think it’s. It’s inevitable and it’s essential, and where you’ll probably see. And one of the best places to start building your platform is probably thinking about what capabilities you can abstract behind an API from a certain scale. Like if you’re really small business, maybe you don’t need this, or maybe you can buy some of these capabilities from organisations and even if LLMs are going to fizzle, which is unlikely, I would say, but it’s still as possible. That’s regulation or whatever throws a wrench in this. In this this story or that’s some of the promises are not field.
But even then, building a platform and being deliberate about your interfaces with the world is going to pay off no matter what you do, because it will help you to be to scale, to be at the same time be scalable and resilient. Now this comes from the three economies model from JetBlue. That I’m sure I’m not doing full honour to because it’s probably better to go to the source, but he talks about the three economies under which any team can work.
So there’s economy of scale, which is a team that is trying to reduce cost by doing the same thing over and over again and reducing variability.
There’s an economy of differentiation, which is like a teams that run under economic different. Are creating new variety so that you can capture more value from customers and you can create better products that address better customer needs. So you can charge more money for it
And then he talks about the third economy which is an economy of scope which is this platform economy which is about creating things that become better. Reuse. And those are things like APIs, because APIs in truth are just designs, they’re an API is a contract, it’s not an API itself, it’s not the server that is running the API. The API itself is purely the contract.
So you can imagine what probably will happen is that we’re going to see a lot more standardisation in APIs because right now we’re in the era of API as a product. Which means that companies create their own products, their own API products that they have to run like a product, like a service. And you know, support or whatever.
But I think that we’re moving toward the world with APIs as utility where standards API designs are reused across the industry and everybody is using the same API designs. And as we started already seeing some of this happening like in the banking sector, in the telco sector, some of the standardisation is happening.
And that’s, you could say you could take a certain wait approach where just wait until other people have done the standardisation work because it takes quite some effort. At the same time the internal transformation you need to be able to work through APIs and to leverage APIs. That’s not simple and I think it’s high time to start working. Especially in larger organisations, to start working on the capabilities to be able to take advantage of these technologies.
Speaker 1
Yeah, there’s some consistencies now with things like Know Your Customer and Identity Management across banking and telecoms where they generally follow a standard or they are very similar between different APIs.
Speaker 2
Right now, even the open, open banking, even the PSD 2 regulation that was implemented in the banking scene. I don’t. Well, there’s exceptions, but almost all banks have implemented their own version. They’ve got different APIs. It’s a mess. But I think because today, having an API, having a good API is still a differentiator, but as these things will standardise and become more and more just table stakes. I think that a lot of a big part of the API surface will become more standardised and it will become more like a utility rather than a product. Where, Yeah, you can, you can just. say OK, I want ,I want a payment. And I don’t care what service I just want this payment done. I don’t. Well, I want it done well, but I don’t care how it works.
Speaker 1
Is there any question that I haven’t asked that I think I should have asked? Who? That’s an interesting question covered a lot.
Speaker 2
Yeah, nothing pops up right away. There’s a lot more surface to cover, like the platform thinking ecosystem thinking, but I think what is relevant to this conversation to this topic has been addressed, I would say.
Speaker 1
But your so your article on the Pronovix website that’s on the blog and it was let’s have like three ideas on AI readiness. The role of APIs and developer portals and generative AI systems that people want to know more. I guess that would be the starting point would be to look at that post.
Speaker 2
Yes, there is a sister post that’s brewing about how can we create a common standard for declaring interfaces. So that well, basically organisations can say these are all our interfaces or these are the interfaces we want to share and then make those from then on not only available for humans, but also to robots.
Because if you think about it, how do people find out that you have a banking app so that you’re banking with a certain bank? How do you find out that there is an app most likely somewhere in the footer there’s a link to the Google Play Store or to the iTunes well. The Apple store, yeah.
But what about more niche applications? How do you find out that there’s an integration with your favourite billing software? Probably you have to go to your billing software and hope that they have a marketplace and find your bank in the billing software. Ace is kind of crazy, right?
So we have sprinkled bread crumbs are over our digital presences that people need to go and find and be lucky that they find to figure out that there is an interface that you can use to do a certain job, which is insane. So I think thinking about how can we change that so that we can become more deliberate in declaring what interfaces we accept interactions through and then maybe you can find the back door still, but that that becomes it becomes more clear.
What are the sanctioned channels like the shipping routes? For information where it’s safe to go, because here you’re going to go really fast and you’re not going to bump into any sandbanks. I think what we need, what we need to work on and that that’s the article I’m currently thinking about.
As I said, like I had this conversation with Swapnil Sapar. And a couple of other people at API days in in London like Jean from Sanofi and Stanek Nemec from SuperFace. Like there’s a couple of people that that we have conversation about like. Hey, wouldn’t it be cool if we if we would be able to declare our interfaces. And you know, just like you have the robot.txt or something like that to say this is how you get support here. Or and here’s our app for this capability and there’s our app for that capability. That’s the integration for this system and for the other stuff. Here’s the API that’s like that.
Speaker 2
And what about just regular customers? How do they find out what’s what your interfaces are? I can even imagine that the European Union is going to regulate this and say every single company needs to have one of these files which at least support at least this, and at least that, and for support you need to have this kind of fall back in case your machine doesn’t do it, and then you have to go to a human.
I can imagine this is going to be regulated so that these kind of games where companies go and hide behind phone paywalls and whatever other really, really toxic behaviour that that is no longer possible. So that kind of thinking. This is what I’m working on. And it’s connected to something I’ve been chewing on for a long time, which is the interface manifesto, which is a complexity philosophical perspective on how interfaces help us to be more adaptive, but that’s really half-baked and it needs at least a couple of months, if not a couple of years of seasoning before I can come out with that, but yeah. But this is this is what it’s inspired by.
Speaker 1
That’s yes, the that looks like it will be an interesting article. So that will be coming out shortly I guess. And if people want to contact you, what’s the best way? LinkedIn or?
Speaker 2
Pretty much it used to be. It used to be Twitter, but yeah, trash fire. So LinkedIn is currently the best way to connect and to get in touch. There’s also our website of course, and if you have any inquiry about developer portals, then my colleagues or me who are very happy to answer your contact request. If you follow the proper interfaces.
Yes, this is the same thing. Yes, same challenge.
Speaker 1
So we’ve done a lot tonight calls in this conversation. So Kristof, thank you for your time.
Speaker 2
Thank you, Ellis. Thank you very, very much for reaching out and for giving me this opportunity to have this conversation. And it anyway was already way too long since we last talked, so it was good to catch up. And yeah, thank you for doing the podcast and for having me as your guest.
Speaker 1
You’re welcome.

 

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.