Tuesday March 15, 2022
Note: This is auto-transcribed and may have errors or missing details. We encourage you to listen to our recordings as well.
Alright, folks, let’s get this started. I think all of you probably recognize my voice by now, but my name is Daniel Stevens, and I’m the CEO of Bird. Today we’ve got some really great planned content for you: we’re going to be unveiling some of our first results for our investor scoring model, which we are getting very close to deploying. And, you know, one of the aspects of any research process is findings, learnings, discoveries, and that’s really what we’re going to be sharing today. In any case, when you’re the first to do something, you’re often in a position where you have to develop new primitive tools, new foundational tools, and a lot of what we’re going to be talking about today are those sorts of breakthroughs: finding new ways to structure blockchain data with the intent of feeding it to machine learning models. Given that today’s content has been pre-planned, unfortunately we’re probably not going to get to many of the community questions that have been submitted on Discord and Telegram. The outreach on that has been amazing; the questions we’ve gotten in the last week or two are really good questions, and there’s a very long list of them. So I’m very much looking forward to getting into those questions, but like I said, I don’t think we’re going to get to them today. If anyone’s questions are in any way time sensitive, as always, reach out to me or reach out to the mods; perhaps we can have a conversation offline or in chat. But for today, I’m very excited to give the floor to our very own Lihan. He is our lead machine learning engineer, a brilliant guy I have known for years. He has been working in defense contracting for DARPA, the US Defense Advanced Research Projects Agency, I think is what the acronym stands for. They work on some really exciting stuff, so we’re so fortunate to have somebody of Lihan’s caliber here today. He’s a Fulbright fellow who has studied at multiple universities around the world.
And as I said, just an absolutely brilliant guy and a pleasure to work with. We also have Ahmed on the call today; he’s probably not going to be at the mic as much, but he had a lot to do with this work. His handle here is Ahmed64221. So as always, guys, reach out to us after the fact if you have any questions, and if you love what you see, don’t be shy to give people like Lihan and Ahmed some kudos, because what they’re doing here is really a first of its kind. And with that, because we do have quite a bit of content: Lihan, I’m going to hand the mic over to you if you’re ready. Can anyone on the team respond, just to let me know that my mic is working here? Yeah, mics work.
Okay, guys, so I’ve just heard from Lihan; he’s having some communication issues, some technical issues here, so just give me a second.
Okay ah there we go
Hey, sorry everybody, we had a mic problem. The disconnect-and-reconnect worked: troubleshooting 101.
Yeah, so I’m gonna get started. Yeah, please do, the floor is yours. Actually, sorry,
quick interruption. For folks that haven’t used this platform before: to see Lihan’s slides, you’re actually going to have to click into the box that says Watch Stream. And you’ll definitely want to see the slides.
But that’s all I’ve got to say Lihan, so go ahead. Thank you, Daniel.
Yeah, hopefully everybody can see this. I’m Lihan. I was recently brought on board at Bird to accelerate its application development and research. Today we’ll be looking at what we’re accomplishing technically: what we’ve done so far and what we’re looking to do going forward. So I’m super excited to be here, to hear from the community, and to take a lot of fun and productive questions. So yeah, let’s start.
There we go.
Let’s get this started. What Bird brings to Web3 is a suite of analytics and machine learning products and services. So if you think about the sleek Web2 experiences we’ve all come to enjoy, they have these data-silo backends where the users on the platform bring a lot of value, but it’s big tech who gets to enjoy the insights from this data. Instead, in Web3, if we think about an open data economy where the data participants are, you know, enjoying part of that revenue, and enjoying dApps with individualized experiences, that’s a much better future to be in. So we want to build tools and services that increase demand for private users, and we want them to have a voice in the governance of these services and functionalities. The product I’m going to deep-dive on is this launchpad wallet scoring application, where we’re going to look at a lot of wallets on the Binance Smart Chain and predict whether they are going to be hodlers of the project tokens that a launchpad will provide. Hope everybody can hear me; can someone confirm or deny?
Yep, we got you loud and clear. Okay, perfect.
All right. So that’s a high-level view; let’s go into this product we’re developing. In one sentence, what is this launchpad use case? It’s an inference application for identifying valuable investors, as represented on chain or cross-chain. How I think about it is this user interface on the right, where you can connect your wallet to this dApp, and depending on whether we think you’re a hodler or not, we’re going to offer you different rates and different benefits for this launchpad project. So for example, the wallet on the right, he or she can hodl for days, so the purchase price of one token is 25 cents, whereas it’s more expensive for the wallet on the left to purchase the same token. And we can look at lockup periods, we can look at purchase amounts. We think virtually any project at a launchpad can benefit from this insight, so we’re super excited to see what investors use this for. Our job as a technical team is to translate this application into an AI or machine learning task, and it gets even simpler: we just want to know a wallet’s probability to hodl. To accomplish this task, we have three technical modules that, after implementation, can answer this question. At the bottom layer we have this data infrastructure module, where we want to listen to on-chain activity, like the Binance Smart Chain, and accumulate all the transactions live, so we can make inference decisions from transaction activity. Now, it’s not enough to just have, you know, a billion transactions; we want to associate these entries to individual wallets, or even better, blockchain entities. So this wallet feature module encodes, if you think about your wallet, features about you as represented by your wallet, from all the past transactions you’ve done. And then the most interesting part is we have something to predict about, in this case:
How long are you going to hold this token, you know, if you’re allowed to buy it? So the inference module is a suite of models that will, you know, predict something about you. So the chain of events starts with data infrastructure, which lets us get data from on chain, to wallet featurization, which lets us compute statistics about this wallet, and then inference, which is: are you going to hodl or not? That’s the high-level view; I’m going to get into each module now. Okay, so I kind of debated whether I wanted to dive this deep, but what we’re looking at is an example of wallet featurization: the output of this wallet featurization module. What are we looking at? The address column on the left is a wallet hash. The project hash is a hash of a project contract. And all of the features, or statistics, we’re going to see belong to that pair. So for example, average holding time, in number of Binance Smart Chain blocks: how long has this wallet held this project? That’s what’s called our target variable, which is the thing we’re trying to predict. And we have other features like receive block number min; that’s the highlighted column. What that is: we looked at all your receive transactions for this project and found the smallest block number among those transfers, so that’s the earliest block at which you received this project token. Then we have, for example, receive block number max; the next column is the latest block where we saw you receive something from this project. And we can go on: we can look at the total value you’ve received for this project token, the average per-transaction value. We have at least 200 of these features, so this table extends to the right for at least 200 more columns, and we can always engineer more features like this.
And in terms of the number of rows, right, we want to eventually index all of the Binance Smart Chain. We already have a couple million of these wallet-project pairs, but we really want all of them, and because they’re publicly available, that should not be a problem. There are quite a number of challenges, but also future directions that we are really excited about; I’ll get into that in later sections. But this is just for everyone to get a feel for what it is that we’re feeding to machine learning models, for example a neural network: we input these rows by the millions, and each observation, or each row here, has a couple hundred features that the model is considering at the same time. Okay. So, at a high level, where we are currently is we have this early end-to-end pipeline, where we start with Binance transactions and can end up all the way at a prediction of how long we think a wallet will hold this token for. The metrics by which we measure our performance are recall and precision; I’ll explain what they are, what I mean by model optimism, and what 65 to 90% recall means. And while developing this end-to-end pipeline, what I think we’re really proud of is we’re doing this in a way that’s extensible, so we can onboard a huge number of new developers or community contributions; that’s something we’re excited about. It’s also modular, so the technical modules we’re looking at are reusable and plug-and-play for future products. And we’re testing it more and more rigorously, because we see all kinds of weird stuff happening on the Binance Smart Chain. I won’t get into that on this slide, but in our internal meetings we’re finding things like tokens that appear in someone’s wallet and we have no idea how they got there; we looked at it on the Binance Explorer. Yeah, all kinds of interesting stuff.
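To make the featurization step concrete, here is a minimal, illustrative Python sketch of how per-pair statistics like the ones on the slide could be computed. The function name, the dictionary schema, and the exact feature names are assumptions for illustration, not Bird's actual code:

```python
from collections import defaultdict

def featurize_transfers(transfers):
    """Roll raw transfer records up into one feature row per
    (wallet, project) pair, mirroring columns like
    receive_block_number_min/max described in the talk.
    Each transfer is a dict with keys: to, project, block_number, value.
    (Hypothetical schema, for illustration only.)"""
    grouped = defaultdict(list)
    for t in transfers:
        grouped[(t["to"], t["project"])].append(t)
    features = {}
    for pair, ts in grouped.items():
        blocks = [t["block_number"] for t in ts]
        values = [t["value"] for t in ts]
        features[pair] = {
            "receive_block_number_min": min(blocks),  # earliest receive block
            "receive_block_number_max": max(blocks),  # latest receive block
            "total_value_received": sum(values),
            "avg_value_per_transfer": sum(values) / len(values),
            "n_transfers": len(ts),
        }
    return features
```

In a production ETL pipeline this aggregation would run in the worker processes against the transaction database rather than over in-memory lists, but the shape of the output row is the same.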
After this summary, I’m going to get more visual and show some diagrams; I feel like that’s how things usually go. The blue circles here serve as the wallet featurization block. On the bottom left is the Binance Smart Chain; that’s how I think about it visually, right: we get blocks in sequence every three seconds, and they’re accumulated into this cylinder called the transaction database. When we want to featurize at the wallet level, we run this main node, and it starts up a lot of workers, who then go to the transaction database, fetch a bunch of transactions per address, and build an intermediate database of these wallet-project pairs. All of this is what we traditionally think of as ETL, standing for extract, transform, load; it’s quite common in all kinds of data applications. And we have really good data engineering expertise on the team, so we can continue to do this for Ethereum: we can build low-latency node listeners for these individual chains. So yeah, getting this part right is super important, and we continue to work on that. Now, for data scientists and machine learning researchers, that’s the right part of the diagram. These green circles are in what’s called a controller-view-model framework. What a data scientist will do is take that cylinder from the blockchain featurization step and instantiate a controller. She would configure this controller with what’s called a YAML file, and the controller in sequence would instantiate a loader for loading the data in batches, a model to predict something about the incoming data, and a view object, which outputs the results and, you know, helps the practitioner diagnose what’s going on.
So yeah, the nice part about the right side is that if you have different ideas about what the model should be and how it should be configured, multiple data scientists can explore different models on their own while sharing the same controller, loader, and view. This lets us explore more hypotheses about what’s working, or what could work. And likewise for the loader: if we want to process the data differently, different scientists can parallelize that process as well. That’s kind of the whole point of why we have this layout
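As a rough illustration of the controller/loader/model/view layout Lihan describes, here is a hedged Python sketch. Every class name, config key, and piece of stub logic here is a hypothetical stand-in, not Bird's implementation; in practice the config dict would be parsed from the YAML file rather than written inline:

```python
class Loader:
    """Stub loader: yields fixed-size batches from an in-memory list.
    A real loader would stream rows from the featurization database."""
    def __init__(self, data, batch_size):
        self.data, self.batch_size = data, batch_size
    def __iter__(self):
        for i in range(0, len(self.data), self.batch_size):
            yield self.data[i:i + self.batch_size]

class Model:
    """Stub model: 'predicts' hodler when holding time >= threshold.
    A real model would be a trained neural network."""
    def __init__(self, threshold):
        self.threshold = threshold
    def predict(self, batch):
        return ["hodler" if x >= self.threshold else "short" for x in batch]

class View:
    """Stub view: flattens batched predictions into one result list."""
    def render(self, batched_preds):
        return [p for batch in batched_preds for p in batch]

class Controller:
    """Wires loader -> model -> view from one config dict, standing in
    for the YAML-configured controller from the talk."""
    def __init__(self, config, data):
        self.loader = Loader(data, **config["loader"])
        self.model = Model(**config["model"])
        self.view = View()
    def run(self):
        return self.view.render(self.model.predict(b) for b in self.loader)

# Hypothetical config, as it might appear after parsing the YAML file
config = {"loader": {"batch_size": 2}, "model": {"threshold": 100}}
```

The design point is that swapping in a different `Model` (or `Loader`) requires changing only the config, so several data scientists can run parallel experiments against the same plumbing.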
for inference. Okay, so with that,
Yeah, and here are some low-level tasks we’ve been doing the past few weeks. The kinds of issues we look at are things like missing wallets in our database; we find wallets with really abnormal behavior, like sending out tokens without receiving any. We check that against the Binance Explorer, and in fact that is what’s happening. So quite a lot of mysteries. To handle that, if you look at this "three to four audited wallets" item: essentially what we’re doing is hand-auditing the weird wallets, hand-auditing how we’re computing these features. So it’s been quite an adventure. Then on the inference side, which I’ve color-coded green on the right-hand side here, we’ve established this basic framework, we’ve been experimenting a lot with all kinds of machine learning techniques, and I want to present some early inference performance, with the caveat that I think it’s good, but it will get better. Okay, now we’re getting into the analytics of what’s happening on BSC. On the left is a distribution of different wallets’ holding times in number of days. What’s jarring to me on the left is that one day is how long most people hold things for; either that, or it’s bots doing some pretty particular trading. Yeah, it’s interesting: most people want to test something out, like a DeFi contract, or just hold some tokens for one day for some specific reason, and then leave that position by transferring it out of their wallet. So if we look at all the wallets that hold past one day, you get something like what’s on the right. We see a steep drop-off after 25 days; that’s the first red dotted line that’s been plotted. You know, we’re not exactly Vanguard-valued clients, right: 100 days is a lot for us. So if you can make it past 25 days, that’s pretty good already. We call this 25-to-100-day class medium-term investors.
And then if you leave your position before 25 days, you’re a short-term investor, and if you go past 100 days, you’re a hodler. Which is pretty good, because the Binance Smart Chain is younger than Ethereum; it’s been around for, I think, two years at this point, and the projects on Binance are even younger than two years. So that’s the task we are left with: we want to find who’s on the tail of the right-hand distribution, the hodlers that will hold this project past 100 days. And, sorry, there are some extreme outliers who go past 250 to 300 days. We really can’t see them on this plot, but the fact that the plot extends that far means these entries exist, so just another comment there. One more thing, I think, is if you have multiple wallets, we would like to link those wallets to the same person. And also, if you’re staking some tokens, we think of that as holding: it’s holding the tokens still. So there are all kinds of nuanced questions to ask about who is holding and who’s not.
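The thresholds above amount to a simple labeling rule. A minimal sketch, assuming the 25-day and 100-day cutoffs described in the talk (the function and class names are illustrative):

```python
def investor_class(holding_days):
    """Bucket a wallet's holding time using the cutoffs from the talk:
    under 25 days short-term, 25 to 100 days medium-term, past 100 a hodler.
    Boundary handling (e.g. exactly 100 days) is an assumption here."""
    if holding_days < 25:
        return "short_term"
    elif holding_days <= 100:
        return "medium_term"
    return "hodler"
```

This is the step that turns the regression-style target (average holding time in blocks or days) into the three-class target the classifier is trained on.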
So, metrics: let’s talk about how our neural network is doing, and first, how do we even talk about it? If a classifier is often correct when it says something is a hodler, that means the model has high precision. But a different question is: if you have 100 hodlers in the population, can this classifier find all 100 of them? If it can find a huge percentage, it has high recall. An implication is that an optimistic classifier will guess hodler even for wallets that are not, but because it’s so optimistic, it will catch more hodlers by guessing them so frequently. So an optimistic classifier has high recall but low precision. Inversely, a pessimistic predictor will say, oh, we only flag definite hodlers; you’re not a hodler, we don’t care about you. In that case the pessimistic model has high precision but low recall. So there’s quite a bit of trade-off depending on how conservative your model is when it classifies things. Coming back to this distribution of holding times: hodlers are quite rare. If we think about the mass under each partition of this distribution, hodlers are rarer to find than the other two classes; in fact, about 60% of all wallets exhibit the short-term behavior. Okay, so let’s get going. This is how I think about which metrics to care about. If this launchpad is working with a client project with a large budget, the goal of that project is to find as many hodlers as possible while accepting that a portion of what we think are hodlers will dump the token earlier than we thought, so before 100 days. Coming back to recall versus precision, we are looking for an optimistic classifier that can get us all the hodlers; that’s more important than avoiding mislabeling short-term investors as hodlers.
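Precision and recall as described can be computed directly. A small self-contained sketch (the function name is illustrative); note how the all-hodler "optimistic" extreme gets perfect recall but precision equal to the hodler base rate:

```python
def precision_recall(y_true, y_pred, positive="hodler"):
    """Precision: of the wallets we called hodlers, how many really are?
    Recall: of the real hodlers, how many did we catch?"""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = ["hodler", "short", "hodler", "short"]
# Optimistic extreme: call everything a hodler -> recall 1.0, precision 0.5
optimistic = precision_recall(y_true, ["hodler"] * 4)
# Pessimistic extreme: only flag one sure hodler -> precision 1.0, recall 0.5
pessimistic = precision_recall(y_true, ["hodler", "short", "short", "short"])
```

The trade-off in the talk is exactly the line between these two extremes, and the stated goal (keep 90% recall while raising precision) means moving along it without giving up the optimistic side.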
The neural network we tried can identify as much as 90% of the hodlers in the test set, which I think is pretty good. But it comes at a cost, because this is an optimistic neural network that likes to give the benefit of the doubt to the medium- and short-term investors. So we can identify the majority of hodlers while accepting that a large portion of these predicted hodlers will actually leave the project earlier than we thought. We have a clear path to identifying the hodlers while having high precision about it; that’s the performance goal for the next leg of our work on this application. And, you know, to keep things short and sweet, we’re looking at more functionalities to build into this application. Like I was saying earlier, if you have multiple wallets, we want to associate them to the same investor. Ahmed, another great data scientist on our team, said, hey, if we had two similar wallets, can we infer the behavior of one, the holding time of one, from the other, because they behaved so similarly in the past? We can sort of graft information between similar investors. And likewise, if you love holding gaming projects and we have a gaming project at the launchpad, we want to explore project-level similarity and use that information to granularize our prediction of your holding time specific to this project type. And there’s more: instead of collapsing all the transactions of a wallet into one row, like we saw in the CSV, we can input all the transactions of that wallet, so the model sees a time series of these transactions and can be more precise about what happened in time. That’s something I’m really excited about. And yeah, there’s more DeFi stuff we can build into our inference procedure. So, yeah, challenges: what’s hard going forward? We want a stronger inference signal.
Meaning, from a first-principles perspective, what is strong evidence that a wallet will hodl something? And, you know, to outsource this to the community: do you have ideas? Do you have telltale evidence that you use to tell whether a wallet will hodl or not? We’re super curious; if you guys gave us great insights, we’d be excited about that. That’d be really cool. And then there are all kinds of things particular to different blockchains, different layer ones. Things like contract calls and trading, whether staking and providing liquidity count as holding, and this mysterious movement of tokens are all things we want to get a better handle on. So perhaps a blockchain engineer or blockchain expert can give us more insight on that. Yeah, so that’s kind of the end of it. Should we do Q&A?
Oh, did I lose everyone? You’re still good; yep, I can still hear you, Lihan. Great.
So so that’s all the content I have today.
Alright, guys. I don’t know, Jack, maybe you can chime in here; I’m not sure what the right protocol is to invite questions from the community. Alright, I don’t hear Jack, so folks in the community, I believe that if you have questions, you can simply unmute, and we should be able to hear you. So, Lihan, I’m not sure if people don’t have questions or if there are just some issues with folks being able to speak. But maybe you could elaborate a little bit about the future, and I don’t mean specifically within this particular application: maybe you can talk a little bit about how the work we’re doing on an investor score could set us up to do work in other sectors, for example gaming, or finance, or lending, that kind of thing.
Thanks. Thanks, Daniel.
I think we are building this framework with a long-term perspective. So we’re thinking of very good foundational engineering whose work can be carried over to all kinds of other tasks and inference applications related to Web3. You mentioned gaming; something we can extend this functionality to is NFTs. Can we trace, you know, can we predict something about the movement of NFTs across wallets? The kind of data and infrastructure we have in place can also service these kinds of new applications. So we definitely have a head start, and we’re building research insights all the time about how we should predict and analyze on-chain activities. Excellent, thanks,
folks in the community, any questions? Jack,
if you’re back, maybe you can give us some guidance on how to encourage questions or get folks to unmute, or whatever the hang-up might be. Let’s do that. Dan, I adjusted the settings; I’m not sure if it worked or not, but people should be able to unmute themselves now. But I do have a question for Lihan.
And my question is whether all the findings that we work on now for the launchpad investor score will be able to be layered on top of the next products that we’re building, or if each of these products is siloed?
So the short answer is: all the Lego parts that went into this framework will be reusable, because they’re modular. And, you know, maybe we’ll build new Lego pieces, but the bulk of the framework carries over to new use cases.
So this, what we’re talking about right now, underpins whatever product development we work on next.
Yes. Awesome. Thank you, Lihan.
Thank you. And, you know, right now we have a Binance live node set up, and we’re also looking at Ethereum live nodes. So after that’s up, it’s a matter of accumulating the Ethereum transactions and featurizing those at the wallet level, and then we want to do more work to tailor the inference to Ethereum. But the code that’s there, and the logic of, hey, instantiate this model now and configure it: that can be used right away on the Ethereum chain.
Lihan, I hope I’m not interrupting you, but it looks like there is a community member that has a question, and he’s got some trouble with his mic. He’s asking specifically about scam tokens, but I think it’s perhaps equally interesting to zoom out a bit further and just sort of talk about, you know, bad actors and mal-intent in general, because it’s an area of crypto that’s obviously a hot topic, with exploits almost on a weekly basis. And the reason why we’re looking at an oracle network for the delivery mechanism is to, you know, prevent any kind of exploit vectors and bad actors. Maybe we can talk a little bit about how you see this potentially affecting the ability to identify bad behavior on the chain and potentially scam tokens, as the question has been posed.
Yeah, is the asker referring to identifying scam projects, or to how we prevent bad actors from using this application?
The question is more about: can a model that is essentially looking to measure and qualify investor behaviors be used, for example, to identify individual wallets that may be up to no good? If you see a wallet that’s got dozens of scam tokens in it, is that information that you think is useful in the inference process? And if so, how?
I think that’s a very compelling question as I think about it more just now. I think if we have case studies of notorious wallets and find traits that these wallets exhibit on chain, it wouldn’t be that far of a stretch to provide, like, a scam score, an up-to-no-good score, for all wallets, depending on whether there are, you know, archetypal scamming behaviors particular to those wallets. So for example, if a wallet keeps on creating short-lived project contracts, I think that’s pretty strong evidence that we should be suspicious of someone like this. And I’m sure there are all kinds of other behaviors that together can form this composite score. So I think, yeah, that sounds very interesting.
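A toy version of the composite score idea, using only the short-lived-contract signal Lihan mentions, might look like the sketch below. The event schema, the threshold, and the scoring formula are purely hypothetical illustrations, not anything Bird has built:

```python
def suspicion_score(wallet_events, short_lived_days=30):
    """Illustrative 'up-to-no-good' score: the fraction of a wallet's
    events that are contract creations which died quickly.
    Events are dicts with a 'type' key and, for contract creations,
    an observed 'lifetime_days'. (Hypothetical schema.)"""
    short_lived = sum(
        1 for e in wallet_events
        if e["type"] == "contract_creation"
        and e.get("lifetime_days", float("inf")) < short_lived_days
    )
    return short_lived / max(len(wallet_events), 1)
```

A real composite score would blend many such behavioral signals, each weighted and validated against hand-audited case studies, as discussed above.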
It’s a really interesting idea to think about contract creation associated with wallets, too; I think that’s something we haven’t really talked a lot about internally, but it’s a really interesting idea. And at least from a very high level, to me that just underscores the potential here. We’re looking at qualifying the value of an investor almost primarily by their trading behaviors, but we’ve talked a lot about how investing is this sort of multifaceted activity in crypto: you can be a really terrible financial investor and do an awful job at actually making money, but be an amazing supporter of the projects that you’re invested in. And so that begs the question: how do you qualify an investor like that? So I think that’s a really provocative question. We only have a few minutes left, so probably not worth going any deeper. The asker also has another question: how many transactions or events in a wallet do we believe are needed for this application to produce a prediction? And I know that’s an incredibly loaded question this early on, so go ahead and give us your best answer.
Um, so, you know, it depends on how well you want this application to perform. For example, in a previous iteration we tried to estimate, to the number of days, how long a wallet will hold; that’s a regression task. If you want a higher level of performance, or preciseness, then the more data and the more behavior we observe about a wallet, the better. Currently we do filter out wallets that have fewer than, I think, five transactions; we just disregard them. But we’re still exploring how aggressively we want to filter wallets based on their available history. At least five transactions, yeah, that’s what we’re doing so far.
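The minimum-history filter mentioned here is easy to sketch. A minimal illustration, assuming the five-transaction threshold from the talk (the names and data shape are hypothetical):

```python
MIN_TRANSACTIONS = 5  # threshold mentioned in the talk; still being tuned

def filter_wallets(wallet_histories, min_tx=MIN_TRANSACTIONS):
    """Drop wallets whose transaction history is too thin to score.
    `wallet_histories` maps wallet address -> list of transactions."""
    return {
        wallet: txs
        for wallet, txs in wallet_histories.items()
        if len(txs) >= min_tx
    }
```

Tightening `min_tx` trades coverage (how many wallets get a score at all) against confidence in each score, which is exactly the exploration Lihan describes.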
And I think there’s another side of this question that potentially deals with refresh frequency, and how real-time the updates of these investor scores would be. One of the things we’ve talked a lot about is chains such as Solana, for example, that are producing incredibly high transaction volumes; there are technical challenges to being able to, you know, ingest those transactions in real time and reproduce model output, again in real time. Obviously that’s accentuated by the number of transactions per second a particular chain is capable of, but I think it’d be interesting for you to, you know, opine a bit on our thoughts about that kind of production deployment question. So, for example, is it reasonable to say that if a wallet has many years of history and tens of thousands of transactions, another several transactions occurring in a day are perhaps not going to move the needle, whereas for a newer wallet that’s still accumulating that history, maybe they would? I don’t know if you have any thoughts about how often individual wallets would be updated with these scores, and whether it relates to the question of how many transactions or how active a wallet is.
The lower the latency at which we update on transactions, the better. For a pretty active wallet that’s still trading right before it touches our dApp, we would like a data update right before we provide a score. On the other hand, there are quieter wallets, in which case I think it matters less; we would index the entire chain fairly frequently relative to those quieter wallets. So, for example, every day we could run a query over the entire chain.
Right, yeah. Thanks for that.
So, not to call you out, but I noticed that you have your mic on; you’re unmuted, Speculation Hill. So if you had a question, feel free to jump right in; don’t be shy to interrupt us.
No worries. I appreciate the presentation so far; I’ve just been listening in, and also trying to help folks figure out the mic thing, so I’ve been a little bit distracted by that as well. No sweat,
man, but we’re very glad you’re here. So, we’re right at the top of the hour, and we definitely want to be sensitive of everyone’s time. Any last-minute questions that folks want to sneak in under the wire? Simon, I know you don’t have a mic, but we can definitely chat more about your question offline; it’s a good one, and it’s one we’ve been thinking about. Anyone else that has questions, feel free to chime in now.
Where can I get access to this presentation that we just saw?
I don’t know if Jack’s mic is working right now, but typically he records all of this and then we put it online, so I’m sure there’ll be some version of that. Worst case scenario, I don’t see why we wouldn’t be able to share these slides, but we’ll have to chat internally about that.
Okay, that sounds great. Excellent.
Well, if no one has any further questions right now, we’ll wind this down. Typically I end these by saying, you know, ultimately our goal is very simple: we want to engage the community as much as possible, because a lot of our best ideas, and certainly a lot of our momentum, come from you guys. So we’ll continue doing these, and this certainly will not be the last time we dive deep into the engineering side. But given that we’ve gotten so many non-engineering-related questions in the last week or two that we didn’t get to today, next time around we’ll certainly get into them. And just to close things out, I do want to mention again, as I did on the last call, that crypto is a global community, and while no one on our team, to my knowledge, is being directly affected by the conflict in Ukraine, I can almost guarantee that someone in our community is either directly or indirectly affected by it. So we just want to put our thoughts and prayers out to all of the people right now that are either suffering or could potentially be suffering quite soon, and we’re here to help in any way that we can. With that, we’ll wind down, and I want to thank everyone for joining today.
Thank you, Daniel. Thank you, everybody.
And if anybody knows how to tell whether a wallet is a hodler by looking at it on the Explorer, please give me a little shout.
That sounds great, folks. Well, we’ll close this down now, and again, thanks for coming; we’ll talk again in about two weeks.
Thanks. Thank you, everyone.
Bird is empowering dApp developers to create the Web3.0 UX of the future by developing wallet-level machine learning prediction products that are accessible within a permissionless, decentralized on-chain oracle. Developers that integrate with our products can, for example, offer variable defi loans or launchpad investment terms based on Bird’s analysis of the wallet’s past behaviors as well as off-chain data streams.
Behavioral prediction products fueled the growth of Web2.0 companies such as Google and Facebook, but centralization has led to power and profit disparities. Combining the power of ML with open and decentralized technologies will enable Bird to create an entirely new tech business model. Operational decisions such as how sensitive data are used and what user behaviors are analyzed can be made by the community (i.e., token holders), with community profit-sharing serving to align the long-term incentives of Bird administrators and ecosystem users.