Numeraire

A data-driven fund platform

Home: https://numer.ai/

reference material

Community

Exchanges listing the coin: 4
Symbol: NMR
Dapp: To be released
Project introduction

Numeraire is a new kind of hedge fund built by a community of data scientists, with the goal of constructing an AI hedge fund for the finance and securities markets.

Executives and partners

No executive team or partner information is available at this time.

Latest News

No news is posted at the moment.

Medium

Epicenter podcast — Staking milestone — Towards Data Science coverage for Numerai

🎙️ Epicenter Podcast
Three years after his first appearance, Richard Craib returns to the Epicenter podcast for episode 348. Richard and co-hosts Friederike and Meher talk about what Numerai has been up to for the last three years, what makes a hedge fund market neutral, and the team's latest projects, Erasure Bay and Numerai Signals.
https://twitter.com/epicenterbtc/status/1283060739842940932
You can check out the 2017 interview in episode 191 here.

🥩 New staking milestone
The total value of NMR staked on the Erasure protocol is higher than the value staked with Augur, Melon protocol, and Dharma combined.
https://twitter.com/richardcraib/status/1271933499914588160
Founder Richard Craib explains the significance:
"The vast majority locked in Erasure is in the NMR token, whereas money locked in other DeFi apps is usually in ETH or DAI, not the protocol's native token… money staked on Erasure is actually *at significant risk of being destroyed* through griefing/burning, whereas money locked in other DeFi apps is usually sitting idle and at minimal risk." — Richard Craib

📊 Numerai tournament in Towards Data Science
Top-tier data science publication Towards Data Science published a post by tournament participant Yuki: "Numerai Tournament: Blending Traditional Quantitative Approach & Modern Machine Learning". (Randomized factor returns chart by Yuki, from his post about Numerai on Towards Data Science.)

🍇 Protocol stats
Chart and statistics provided by DeFi Pulse as of July 14, 2020.
Total value locked is equivalent to: $2.3M USD / 9.6K ETH / 250 BTC
NMR locked: 115.1K (1.05% of supply)

🍕 Most interesting requests
💬 Tasteful Telegram sticker pack (open) https://twitter.com/ErasureBay/status/1267548990465175553
📈 Unemployment data (fulfilled) https://twitter.com/ErasureBay/status/1268687508931375105
🔒 MakerDAO Vaults (fulfilled) https://twitter.com/ErasureBay/status/1268254121225670657
🎻 Bespoke music recommendations (fulfilled) https://twitter.com/ErasureBay/status/1274019489110044677
🎨 Artwork (open) https://twitter.com/ErasureBay/status/1276205493791252482

🔵 Coinbase exploring 18 new tokens
Coinbase published a blog post announcing that it is exploring 18 new tokens for potential listing on its platform, including Numeraire.

Connect with Numerai: Telegram / RocketChat / Twitter

Epicenter podcast — Staking milestone — Towards Data Science coverage for Numerai was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

2020-07-22

A New Data Science Competition Where Being Different Pays

Not every data scientist wants to play "Find the best version of XGBoost".

Data science competitions have become stale. As the field of data science was emerging, landmark competitions like the Netflix Prize and early Kaggle competitions encouraged new algorithms and creativity. But now there are a handful of algorithms which are known to perform best at certain types of problems. Today, data science competitions typically boil down to: "throw 1000 different XGBoost models at the problem, cross-validate, and see which combination of hyper-parameters + preprocessing steps performs the best". The process of building a good model on any dataset has become monotonous and automatic. (In fact, Google, Microsoft and others have automated it with cloud services aptly called 'AutoML' because these tools require no insight or creativity.)

The Numerai data science tournament is different. Numerai gives out its grid-searched XGBoost model for free and now poses a new challenge to its data science community: can you build a model that's different from what everyone else has submitted? The problem is no longer about finding the best parameters for XGBoost, but about building an original model that hasn't yet been discovered.

A New Type of Data Science Competition

In the Numerai tournament, participants will be paid not only for performance — they will also be paid for originality and uniqueness.

The Numerai tournament, if you aren't familiar with it, is a data science competition where participants are given a dataset that appears to be a simple regression problem. In reality, the data is obfuscated stock market data, and the participants are predicting future price movements. Those predictions are then combined by the Numerai Meta Model, and that meta model is used to control the capital of the Numerai hedge fund.

(Numerai Leaderboard)

I now work for Numerai full-time, after being a participant in the tournament for a while before that. I had always avoided other data science competitions because they felt tedious — like my purpose in the competition was to just put the problem into my hyper-parameter tuner and then throw as much compute as possible at it. Numerai drew me in, though. The idea of many different data scientists submitting unique models to help control the capital in a real hedge fund is incredible. Then, when you start playing with the data, you see that it feels nearly impossible to beat the model they give out for free, and you are forced to think harder about how other users are managing to climb the leaderboard. As unique as the tournament already is, we are now making it much, much more interesting.

Previously, users were paid only according to how well their predictions match up with what really happens in the market. If a user's predictions perform better than random chance, they are rewarded; if they perform worse, they are penalized. The result is many users submitting very similar models of the same structure, because that structure is known to perform consistently well.

As it turns out, Numerai doesn't necessarily benefit from getting 1000 submissions that all predict roughly the same thing: only the first one or two of those submissions are really useful, and the other 998 might be redundant information. Numerai's true power emerges from having many unique models that all have different strengths. Those unique models become individual building blocks, and we can combine them in a way that creates an incredibly powerful and unique portfolio.

We know that the Numerai dataset is rich, and to get all of the information out of it, we need users to try new things. Users are already using modeling approaches on our data that we have no idea how to recreate, and we want to encourage everyone to continue developing models like these. These users are extremely valuable to the meta model, but are not yet being proportionally rewarded. That's all about to change.

That's why we've introduced Meta Model Contribution. Meta Model Contribution estimates how valuable each model is to the meta model that runs the hedge fund, so that those users can be paid based on their real value added. The result is an incentive structure that aligns directly with the hedge fund. By reorienting the very objective of the tournament, we are turning all of the data scientists into hyper-efficient data-miners for the hedge fund.

The New Data Science Process

Data scientists will be familiar with the idea of having an optimization function: the calculation of a metric that measures the performance of your model, so that you can compare your various attempts against one another in an objective, quantifiable, and potentially automatic way. In typical data science competitions, performance is one-dimensional. For example: "maximize the percent of rows classified correctly", or "minimize the average squared distance between each prediction and its respective target". In either case, the data scientist tries hundreds or thousands of combinations of models, parameters, and pre- or post-processing steps, and sees which one gives the best result.

The new process for the Numerai tournament requires consideration of an entirely new dimension: instead of considering only performance, the data scientist needs to consider their predictions' independence with respect to other users' predictions. An intuitive way to quantify a good model in this new two-dimensional competition might be performance * (1 - correlation_with_all_other_models).

When users are paid only on score, they tend to cluster together towards the top-right of this graph. The orange dots represent users who may not be the highest scoring, but have high scores relative to how unique they are.
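The two-dimensional intuition above can be sketched in a few lines of NumPy. This is an illustration of the article's performance * (1 - correlation) idea, not Numerai's actual payout formula; all names and the synthetic data are invented for the example.

```python
import numpy as np

def originality_score(my_preds, other_preds, targets):
    """Illustrative two-dimensional score: performance weighted by how
    uncorrelated the submission is with the crowd's consensus."""
    # Performance: correlation of my predictions with the true targets
    performance = np.corrcoef(my_preds, targets)[0, 1]
    # Similarity: correlation with the mean of everyone else's predictions
    consensus = np.mean(other_preds, axis=0)
    similarity = np.corrcoef(my_preds, consensus)[0, 1]
    return performance * (1 - similarity)

rng = np.random.default_rng(0)
targets = rng.normal(size=1000)
# What "everyone" submits: a decent but widely shared signal
consensus_model = targets + rng.normal(scale=2.0, size=1000)
crowd = np.stack([consensus_model + rng.normal(scale=0.1, size=1000) for _ in range(5)])
clone = consensus_model + rng.normal(scale=0.1, size=1000)   # near-duplicate of the crowd
unique = targets + rng.normal(scale=4.0, size=1000)          # weaker, but original signal

# The clone scores higher on raw performance, yet the original model
# wins once uniqueness is factored in.
print(originality_score(clone, crowd, targets))
print(originality_score(unique, crowd, targets))
```

This is why, under the new format, a unique model with a modest raw score can out-earn a stronger model that merely echoes the consensus.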
These users deserve to be paid more. Based on the way Numerai rewards participants, a unique model with a score of 0.01 might be rewarded more than a standard model with a score of 0.03.

So a data scientist might first generate a handful of what she thinks are the most common types of models. Then she can write an optimization function that penalizes prediction-similarity to those models, and use that new metric to iterate on her own pipeline. With this approach, she can construct a model that has a great chance of being extremely unique while also performing well on the dataset, maximizing her payout. This is a rough first cut at tackling the new tournament format; we expect our community of data scientists to push it far beyond any ideas we currently have.

Meta Model Contribution aims to reward the users who are best able to find these unique and valuable approaches. We quantify this by first building a meta model from all users' submissions. Then we take each submission and residualize (subtract out) the meta model predictions from it. Whatever is left over after being residualized against the meta model is what we score versus the true stock market results. This encourages users to find new information in the data that few others were able to find.

Rewards

Track your performance and contribution to the meta model each week and compete for the top leaderboard spots. We've paid out $1,100,000 worth of cryptocurrency to users in the last three months alone, and we want future payouts to be allocated to the data scientists who help the hedge fund the most.

If you're a data scientist or machine learning whiz, head to numer.ai to get started modeling, controlling the hedge fund's capital, and earning your share of the tournament payouts.

A New Data Science Competition Where Being Different Pays was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.
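The residualization step described in the article can be sketched as follows. This is a simplified illustration (a plain least-squares projection and Pearson correlation), not Numerai's exact MMC computation; the function names and synthetic data are invented for the example.

```python
import numpy as np

def residualize(submission, meta_model):
    """Subtract out the component of a submission explained by the meta
    model (least-squares projection), leaving only its original signal."""
    x = meta_model - meta_model.mean()
    y = submission - submission.mean()
    beta = (x @ y) / (x @ x)
    return y - beta * x

def mmc_sketch(submission, meta_model, targets):
    """Score the residual against the true results: only information the
    meta model did not already contain earns credit."""
    residual = residualize(submission, meta_model)
    return np.corrcoef(residual, targets)[0, 1]

rng = np.random.default_rng(1)
targets = rng.normal(size=2000)
meta_model = targets + rng.normal(scale=1.5, size=2000)

copycat = meta_model + rng.normal(scale=0.05, size=2000)  # echoes the meta model
original = targets + rng.normal(scale=3.0, size=2000)     # independent, new signal

print(mmc_sketch(copycat, meta_model, targets))   # close to zero
print(mmc_sketch(original, meta_model, targets))  # clearly positive
```

The copycat submission keeps almost nothing after the meta model is subtracted out, while the original one retains signal, which is exactly the incentive the new scoring is meant to create.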

Numeraire

2020-07-06

Fundraising — Improved FCAS — Erasure Bay Highlights 📣

(Richard Craib delivering a keynote at ErasureCon 2019)

💸 Numerai raises $3 million in funding
Numerai raised $3 million in a recent sale of NMR led by Union Square Ventures and Placeholder, with CoinFund and Dragonfly Capital. Numerai's founder Richard Craib also participated.
"NMR is all about having skin in the game — it would be cringe if I didn't participate" — Richard Craib
"Richard and the Numerai team have built an exceptional platform over the past few years and the latest initiative, the Erasure protocol, is already generating valuable information and insights within its trustless marketplace", says Alex Felix, Managing Partner at CoinFund.
(Joel Monegro of Placeholder and Fred Ehrsam of Paradigm in discussion at ErasureCon 2019)
"Enabling new markets and giving market access to more people are the killer apps of crypto, and we're excited by the progress Numerai is making in both of these areas with the launch of Erasure Bay," said Tom Schmidt of Dragonfly Capital Partners.
Read more at The Block, CoinDesk and ChainNews:
Union Square Ventures, Placeholder among the investors in Numerai's new $3M token sale - The Block
Numerai Raises $3M in Another NMR Token Sale With Union Square Ventures, Placeholder - CoinDesk
Hedge fund competition platform Numerai completes a $3 million token sale, with Union Square Ventures and others participating - ChainNews

🏆 Erasure protocol tops DeFi Pulse daily growth chart
Recently, the Erasure protocol saw its total value locked increase by over 18%, the most of any project tracked by DeFi Pulse. (Tweet from @defipulse.)

👨‍🎓 NMR's updated report card
As recently reported in The Street, Flipside Crypto's Fundamental Crypto Asset Score (FCAS) for NMR grew significantly during May, receiving an "A Grade". (Image courtesy of The Street.) Flipside's FCAS is a comparative metric used to assess the fundamental health of crypto projects, providing institutions with a means to track important information on different digital assets.

💅 Frontend facelift (cont.)
Since the previous update, the Numerai team has been hard at work improving the Erasure Bay frontend to make requesting information even easier. Keep your eyes on the Erasure Bay website as it evolves, and remember to let Jonathan know if you have any feedback or suggestions!

🐍 Mike talks Python and machine learning
Numerai data scientist Mike Phillips was quoted in a recent post by neptune.ai about best practices used by 41 different ML startups. On Pandas and Scikit-learn, Mike said: "Modern Python libraries like Pandas and Scikit-learn have 99% of the tools that an ML team needs to excel. Though simple, these tools have extraordinary power in the hands of an experienced data scientist."

🍇 Protocol stats
Chart and statistics provided by DeFi Pulse as of June 4, 2020.
Total value locked is equivalent to: $2.4M USD / 9.9K ETH / 244 BTC
NMR locked: 96.8K (0.88% of supply)

📊 Community-built analytics
The talented data scientists at Omni Analytics Group updated their Erasure Bay analysis infographic. Notable takeaway: most numbers have roughly doubled in the past month! Check out the whole graphic on Twitter.

🍕 Most interesting requests
Erasure Bay saw 49 new requests in May. Here are some of the highlights:
📊 Get published (open) https://twitter.com/ErasureBay/status/1257071836241604610
🔢 Updated infographics (closed) https://twitter.com/ErasureBay/status/1263155165315469312
🎶 Beauty (closed) https://twitter.com/ErasureBay/status/1265511086255677442
👨‍🔬 User research (closed) https://twitter.com/ErasureBay/status/1262868376470163456
💻 Software engineering (open) https://twitter.com/ErasureBay/status/1260314014145318913
💟 Hiring (closed) https://twitter.com/ErasureBay/status/1258146288828166144

🎓 Upcoming Office Hours
The next Erasure Bay Office Hours will be announced soon. Join the call to ask questions, give feedback, or make feature requests! In the last office hours, we received a development update on the then-recently released Reveals feature, heard thoughts on allowing multiple people to fund a single request, and Jonathan invited everyone to slide into his DMs with any feedback on the Erasure Bay user experience. Be on the lookout for the upcoming recap.

Connect with Numerai: Telegram / Rocket.Chat / Twitter

Fundraising — Improved FCAS — Erasure Bay Highlights 📣 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

2020-06-04

Office Hours with Arbitrage #9

From April 30, 2020

Arbitrage took a minute to celebrate nine straight weeks of Office Hours, thanked the Numerai team and the audience for joining, and jumped straight into the questions from Slido.

Questions from Slido

From chat: The NMR value staked is different from the Staked Total tab now. Did the increases for pending rounds get dropped from the running total?

Arbitrage recalled that community member Jrai answered this question on Rocket.Chat before asking the audience: "Has anyone figured out how to align the reported stake value with what's actually going on? I haven't either. I know the team was working on that this week, so I assume that's what was going on." He did notice that something was updated in the submissions tab, possibly the hover text over the chart.

(Submissions tab, now featuring more detailed hover text.)

Will the Rep Score and Leaderboard matter much after bonuses are gone, other than bragging rights? Will the Rep Score be replaced with something else?

"I think the intrinsic value of the tournament is your rank," Arbitrage said, "kind of like Kaggle." He hopes the team keeps Reputation, whether or not it's rewarded, because rank can be a way to document achievement. His students, for example, can include their rank on a resume. "It still is a positive indicator." If the weighted average of your reputation is positive, it shows you've consistently produced high-correlation predictions, and that can be ranked over time. Arbitrage added that, while Rep Score might not matter much beyond bragging rights without the bonus, data scientists should continue to submit predictions in case Numerai introduces a new payment system based on some kind of rolling performance. Should that happen, it's in the participant's best interest to have weekly submission consistency.

"Don't skip weeks."

Joakim: I like the leaderboard as well, but it would be nice if we were somehow compensated for being on top of it.
Arbitrage: Yeah.
Joakim: But I'll also say I know that's easy to game, and I don't want something that's easy to game.
Arbitrage: Well, that's the risk, right?
Joakim: Yeah, I don't know the solution.
Arbitrage: Yeah, that's the risk. I once suggested doing a once-per-quarter bonus. Maybe every 12 weeks you get paid out in a nonlinear function on reputation? But again — MadMIN is still top of the board, and it's been there for a long time. To your point, it's difficult to avoid the gaming. I think we just go with what the team gives us, because throughout the years there have been dozens and dozens of proposals on ways to compensate people without staking, and ultimately it just doesn't work.

Arbitrage brought up a previous tournament iteration where competitors could earn badges for different achievements, adding that he would support additional leaderboards for different metrics: highest Sharpe over 20 weeks, last-minute submissions, longest-running model, etc.

Joe asks: Would you blend or optimize your models to maximize Validation 2 correlation or Sharpe? If so, what is your strategy to avoid overfitting the Validation 2 data?

Arbitrage explained that he's not certain maximizing on the Validation 2 data is the correct path, because the data is from a crisis period and, as Richard has mentioned previously, it isn't a full year. Because it encompasses the COVID-19 drawdown, it represents a special segment of the Numerai data. "I would be concerned to have a model that performed well on Validation 2 but perhaps didn't do well on Validation 1," he said. "I want it to do well on Validation 1 and 2."

How do you do that?
Arbitrage said he doesn't know; he hasn't experimented enough with Validation 2 yet, but he's created models trained on both validation data sets. And as of the week before this Office Hours, Arbitrage finally installed XGBoost.

"Install XGBoost."

"I've got that spooled up, the models are uploaded to Compute, and I'm going to wait and see how they perform."

Arbitrage said that he ultimately doesn't think Validation 2 is relevant enough to be the sole focus of a single model, adding that if you have a model that performs well on both sets of validation data, or only has occasional misses on Validation 2, it's probably a good model. He would be concerned if it performed well on Validation 2 but poorly on Validation 1, as that probably indicates the model is overfit to subsets of eras.

What's a good correlation and Sharpe for Validation 2?

We don't know what good scores are for Validation 2 yet because it's still too early; nobody has any results. "We'll check back on it in a quarter," Arbitrage said, adding, "some of us are training on it and some of us are not. I think that's going to be an interesting bifurcation." He suggested that tournament participants take note of which models are trained on Validation 2 and which are not (in Rocket.Chat, for those willing to post), then shared that his models Arbitrage, Leverage, and Culebra Capital will include Validation 2 beginning with Round 210, while Arbitrage 2, Leverage 2, and Culebra Capital 2 only train on the training data set.

Arbitrage: How about you, Michael Oliver? I'm calling you out.
Michael Oliver: I actually haven't had time to play with Validation 2 very much. It's on my to-do list: I want to run all of my models through it to see how they do. I don't know if I'm going to change any of them in the short term. I'm really curious to see how my feature-neutralized models do; they seem to be doing okay so far with live data.
Arbitrage: Yeah, I can't get that code to work. I have no idea what it's doing.
Michael Oliver: What it's doing? It's basically doing a linear regression on the eras and then subtracting it off. That's all it's doing.
Arbitrage: So it's performing a regression per era on the features, but if you neutralize your predictions in that first era, wouldn't it depend on which era you pick as your first neutralization?
Michael Oliver: No, actually, because you do each era independently: for era 1, you do a linear regression from the features to the target, get a prediction of the target, and subtract that prediction off. Now your new target is neutralized with respect to the features in that era. Then you do that for every era.

(Target: neutralized.)

Arbitrage: Ohhh, you're doing it beforehand — I was looking at the code of the person who modified their predictions after the fact.
Michael Oliver: You could do that too, but I was training on neutralized targets. That's how you neutralize the targets.
Arbitrage: See, that I understood. When I saw the forum post, from I think Jacker Parker, they neutralized their predictions after the fact, and I didn't quite understand how that was being done. But I think I'll just stick with the target neutralization first.
Richard: Yeah, it's really a projection. You're trying to find out what's the orthogonal component. You'll have some of your signal strength coming from one feature, or a few features, and if you ask, "what's the model saying if I'm neutral to those features?" — or orthogonal to those features — that's what the code we shared was about.
Arbitrage: Oh hey, thanks for that. I've been playing catch-up on code since I finally fixed my XGBoost issues. I've been flying a little fast on getting all of this written down, so I haven't really had time to sit and look at the code. Thanks for that, Michael; I did think it was a simple linear regression subtracting out.
Michael Oliver: I actually did both, plus the method I talked about in Rocket.Chat a while back, where you one-hot encode all of the things and then do the linear regression from the one-hot encoded values — like a generalized additive model from all the features to the target. So I have one model that uses that type of neutralization and one that uses the linear neutralization they did as well. They seem to be performing a little differently. Doing it the second way, which I called super-neutralized, doesn't leave much signal left in there.
Arbitrage: Yeah, I would imagine. "Hey, let's take a really hard data set and make it even harder!"
Michael Oliver: Yeah, basically.
Arbitrage: Awesome. If you're willing to post the code, I can put a notebook together and we can add it to the example scripts.

What are some recommended ways to use the feature categories?

Arbitrage explained that XGBoost has a way to designate which columns can be interacted, so he considered constraining XGBoost to only consider interactions across the feature groups rather than within them (because the features are thought to be correlated across time, e.g. Charisma from era 1 will still be correlated with Charisma from era 120). He wondered what would happen if he restricted those interactions within a feature group and instead only looked at interactions across groups, such as Charisma interacting with Intellect. Arbitrage added that neural nets and XGBoost, two of the traditionally best-performing models, both look at interactions, so that might be a way to leverage the feature categories.

Elementary school-level question: why do uint16 or uint8 data types in Python help reduce memory for trees but not in Keras?

"Good god, I have no freaking clue." — Arbitrage

Arbitrage admitted that he doesn't have much experience in computer science (his background is finance), and so passed the question along to anyone who wanted to give it a try.

Keno: I just posted a link to a question on Stack Overflow: it's basically because it returns a real number, so you have to convert it to a float. I had no idea, I had to look it up, but it makes sense that trees and XGBoost can give you floats instead of real numbers, whereas most neural networks give you a "yes or no," binary output.
Arbitrage: Makes sense to me, I like it. I'd subscribe to your podcast.
JRB: I think I could probably explain this.
Arbitrage: Yeah, JRB! Take it away.
JRB: With neural networks, and for that matter linear models, it's usually a good idea to standardize your input. Essentially, a tree-based model just tries to split the data set at the best possible point, so it's insensitive to scale and variance. That being said, I don't think there's anything preventing you from training a neural net with int8 features (that's what I do for my day job). I do a lot of model quantization, which is essentially trying to compress models to fit them on mobile phones and embedded devices — there it's all int8. It makes convergence a lot harder, but there are a lot of tricks. You can train a neural net with int8 features, but it's easier with full-precision features standardized to zero mean and unit variance.
Arbitrage: Thanks for that, that's very helpful.
Michael Oliver: I think by default Keras will just convert everything to float32 unless you do work to tell it not to. They usually want things to run on a GPU, which is usually float32, so by default Keras is just going to up-convert anything you pass it.
JRB: One thing you could do if you're using Keras: there's a layer called the lambda layer, and you could possibly feed it int8 inputs and upcast them to float32 in the first layer. I haven't used Keras in a while, so I'm not sure it will work, but it's definitely worth trying.
Arbitrage: Yeah, what neural network modules or packages are people using these days? Anybody willing to divulge?
JRB: I've been using Jax for a while now and it's pretty good.
Arbitrage: How about you, Mike P? The Master Key model's built in which framework?
Mike P: Master Key is built in a very simple framework called scikit-nn; it's a basic Keras wrapper for scikit-learn. It lets you play with all of your models like XGBoost using simple feed-forward models, so it's pretty crazy. It gives you access to things like dropout and all of the popular bells and whistles, but it doesn't let you try crazy stuff like custom loss functions.
Arbitrage: I don't believe you, I don't believe you at all. But, to each their own. I really appreciate all of the community members stepping up to answer questions I have no idea about or have no business answering in the first place. I don't do neural nets, don't profess to know anything about them at all: my knowledge of neural nets is very basic, and I know there are some experts in this crowd.

I feel emboldened by the new machine I got.
Does it make sense to make a massive neural net with hundreds of layers and tons of custom features, or am I wasting my time?Fresh off of building a new computer (with input from Joakim) and with XGBoost finally installed, Arbitrage related to this question. He doesn’t think it makes sense to build a complex model, referring back to his conversation with Bor (who uses an intricate genetic algorithm) comparing their model performance.“It goes back to Occam’s razor, which is going to be my default answer when it comes to choosing complexity over simplicity.”Was Validation 1 not very representative of the old test set? Is Validation 2 more similar to the new test set? Do you think Validation 2 is more representative of live data?Arbitrage thinks Validation 1 actually was a good representative set because he trained on it and maintained a ranking in the top twenty for two and a half months, saying it had to be representative otherwise he wouldn’t have performed nearly as well.Regarding Validation 2, Numerai didn’t provide a new test set, they’ve used a subset of the test data to create an additional validation era. He urges caution in treating Validation 2 like live data, because the COVID-19 regime change is included in the Validation 2 set.“It’s the combination of Validation 1 and 2 that matters,” he said, “because it’s more validation data than we’ve ever had before, and it’s disjoint in time and also regime. That’s an awesome validation set. I want to discourage the thinking of it in terms of ‘Validation 1 and 2’ and look at it instead as just ‘validation.’”Joakim: I plan to use Validation 1 and 2, with 2 as my test set at the end when I’m done with my model. If it does hold up, I’m hoping that it will do well on live data as well.Arbitrage: I think it will. If you can get consistency for the entire validation set, all, 20 eras? I can’t remember the whole count.Mike P: 22.Arbitrage: Thank you. If you do good across all 22 eras, you have a very good model. 
Previously, if you did well on all 12 validations eras, you had a pretty decent model. The additional eras add more validation. It makes your validation just a little bit better — as long as you don’t peek too often! You validate your model on the validation data, you’re done. That’s it. That was your hypothesis test. If you do it again, you have to divide your test statistic by two (if we were doing this in an empirical sense). Every time you take a peek at the validation data as an out of sample test, you’re reducing it’s validity as a test. That’s why I urge extreme caution with all this stuff.Joakim: If it doesn’t hold up on Validation, what do I do?Arbitrage: So let’s say if you get negative results across all eras?Joakim: Just start over with something else?Arbitrage: Yeah… sorry man.Arbitrage: Check your cross-validation, make sure you’re not looking at all the data at once in every model run, last week I mentioned I tell my students to divide the data into three sets, train models on each one, then average them together and you’ll get better performance. If you do improve, it shows you were overfit and the ensembling of the models cancelled out some of the bias and produced a decent prediction (even though it’s still overfit).What if I merge Validation 1 with the training data so I get more data to train on? I’m just a newbie to data science.Arbitrage noted that this combination strategy is exactly what he does. He said you can combine eras 1–120 with 121–132 and use that as the training data. 
The challenge with this method is that you don't have any data left for validation, so you have to upload your predictions to the tournament and then wait for your scores to post to see how well the model performed. If you want to try this strategy, Arbitrage said, the important things are to make sure the parameters are set for each model and to do everything you possibly can to avoid overfitting.

Bor asks: Are there other indices out there (like the VIX) that track something interesting? Maybe a zero-beta fund or index?

As Arbitrage pointed out, finance people love building indices and portfolios to track different metrics or hypotheses. There are hedge fund indices (such as the one from hedgefundresearch.com), broken out into different categories of hedge funds, and there's also an AI index and an equity/quant index. Richard added that a lot of funds that are doing well don't report to any of the hedge fund indices, whereas the ones performing poorly do, so these indices may not be the best guide.

Arbitrage asked Richard if he's aware of any indices that track the flow of funds into different hedge fund strategies, but he wasn't aware of any.

How can I time the moment so I can change the stake of my model?

This refers back to the topic of risk management (discussed at length in the previous Office Hours).

"I decided I don't want more than 400 NMR at stake on Arbitrage, so in the current regime I have to guess if I'm going to go over, and time how much I should withdraw one month out. The alternative is: if your model is consistently growing, queue up a withdrawal of a fixed amount every time that you can."

To get more insight into what future staking and withdrawal systems will look like, Arbitrage turned it over to Jason or Mike P to chime in.
Mike P noted that it's still too early to discuss in great detail, but the team is redesigning the staking mechanism based on feedback in Rocket.Chat, particularly because rules changes that shift the tournament from a daily to a weekly mentality make previous methods obsolete.

Are there signals showing changes in the market conditions? #StakingStrategyisnotDead

Arbitrage: I completely agree that #StakingStrategyisnotDead, but I just don't know how we can use any information to improve our staking outcomes, other than that we should be able to adjust our stakes down as fast as we can increase them.

Marcos' book talks about discrete maths and quantum computers: is there an introduction about these topics?

Author's note: this question refers to Advances in Financial Machine Learning by Marcos Lopez de Prado, newly announced as Scientific Advisor to Numerai.

Marcos Lopez de Prado announced as Scientific Advisor to Numerai at #ErasureCon @lopezdeprado — @numerai

Outside the scope of his field, Arbitrage wasn't sure what good primers on these topics would be, but suggested asking in Rocket.Chat.

Arbitrage: And spiking neural nets? What's with all the neural net questions this week? You guys are killing me. I don't have a clue what a spiking neural net is, and I don't think I want to know. I'm going to punt on that too. I thought they were fake, but they're a real thing, and it's actually pretty interesting. I don't have a clue if you could implement something like that and have it work, though.

Michael Oliver added that for the Numerai tournament, implementing a spiking neural net is probably more trouble than it's worth. "Generally there's no real advantage for spiking neural nets for most statistical machine learning problems," Michael said.
"Theorists find them interesting for modeling what brains actually do, but if you're just trying to learn a function, there are more straightforward ways."

Arbitrage added that if you have the ability to create a spiking neural net and can iterate on it, it's probably not a bad thing to try, because it will most likely have high MMC (since nobody else is using that strategy).

Why are you not using staging for deploying changes to the user interface?

Arbitrage redirected the question to Mike, who immediately called for backup. His interpretation of the question is that the person wants to know if users can have more input before big UI changes.

Mike P: My response to that would be probably a lot of it is turnover time … Patrick, I see you jumping on, thank god.

Patrick: We'll test it in production. I think we can do more testing in staging. Multi-accounts are actually in production now, but they're feature-flagged in a beta group. I think we can do more of this testing, it's just a matter of us implementing it. It's great feedback.

Unfortunately, I can't participate (differences in time zones), but I want to listen to what you will discuss in Office Hours. Can you record and post a link to the video?

Arbitrage pointed out that the Office Hours are recorded, but in an effort to keep them an intellectual safe space (where anyone can, and should, feel encouraged to ask any questions and discuss openly), the recordings are not shared publicly.
However, each week's Office Hours are summarized and published on Numerai's Medium page. He also teased that over the summer, he's looking forward to producing more content and playing around with how it's shared with everyone.

Arbitrage asks Michael Oliver about his era-boosted trees: The only optimization parameter for the era-boosted trees is the correlation; is there a way to do era-boosted trees with two optimization parameters?

Enjoying his newly-installed XGBoost, Arbitrage had been experimenting with the era-boosted tree code Michael Oliver posted on the Numerai forum. Read more about era-boosting in the original thread.

Arbitrage: I played around with the proportions, the number of trees — this thing is so grossly overfit I don't know what to say.

Michael Oliver: I mean yeah, you can play with all of the parameters of XGBoost as well. You can change the column sampling and the proportions (as you said), and you can add whatever metric you want. It's just using mean squared error to fit the thing, and you're choosing which eras based on whatever metric you want. You could potentially put an auto-correlation metric in there, too.

Arbitrage: That's something I want to improve. I finally got to a point where I have a correlation on validation using the era-boosted notebook I put up, I'm at 0.037, but I think it's grossly overfit because I'm showing sharpe scores of five or eight, and correlation scores of 0.4 in some cases.

Michael Oliver: Yeah, that's a tricky thing to evaluate. If you look at the Integration Test model, its in-sample performance is super high. This idea that your in-sample performance and your out-of-sample performance should be the same doesn't really hold.

Arbitrage: Not in this data set.

Michael Oliver: There are interesting reasons for that, but the only measure that really matters for evaluating how overfit something is, is out-of-sample performance. Worrying too much about your in-sample performance being too high, I don't think it's worth it.
All that matters is the generalization performance.

Arbitrage: One thing I tried: I took the era-boosted notebook, put 100 trees per step, and did 20 iterations to get to the 2,000-estimator equivalent of Integration Test. I'm still tinkering with that, but I find it very interesting. My concern is that it's over-sampling some eras far too often.

Michael Oliver: I noticed oscillations of groups of eras falling in and out. On the histogram, it would get flat and then jagged, flat then jagged.

Arbitrage: Mike's nodding along in excited agreement here.

Mike P: The proportion parameter's really important for tuning down that oscillation. If you turn down the proportion parameter, you'll get much less oscillation and more consistent growth. But 0.5 is what we've found to be the best in our tests. If you don't like that oscillation or don't trust it, you can try taking it down to 0.2.

Arbitrage: What did you guys tinker with internally? Just so I don't have to do it myself.

Mike P: Not too much — it was an idea that had been floating around for a while and we wanted to put something out there, so I threw together some code. So it wouldn't take too long to run, I only used about 200 trees. I played with the proportions a little bit; I saw the oscillations as well and wanted it to be a little bit smoother. But it's all still wide open, I don't know what's best, honestly.

Arbitrage: Yeah, I'm going to toy with it some more. But at least there's a notebook out there that works. And Michael Oliver, if you're still willing to share it, I'll put up a notebook with the feature neutralization code.

As a special surprise, during closing remarks Michael Oliver announced that, one month from recording, he would be joining the Numerai team.
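For readers who want to experiment, the era-boosting loop discussed above can be sketched roughly as follows. This is not Michael Oliver's notebook code (which uses XGBoost); it substitutes scikit-learn's `GradientBoostingRegressor` with `warm_start` so the example stays self-contained, and all data here is synthetic. The `proportion` parameter Mike P mentions controls what fraction of the worst-scoring eras each new batch of trees is fit on:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def era_boost_train(X, y, eras, trees_per_step=10, num_iters=3, proportion=0.5):
    """Era boosting sketch: after each batch of trees, fit the next batch
    only on the worst-scoring `proportion` of eras."""
    model = GradientBoostingRegressor(n_estimators=trees_per_step, warm_start=True)
    model.fit(X, y)
    for _ in range(num_iters - 1):
        preds = model.predict(X)
        df = pd.DataFrame({"era": eras, "pred": preds, "target": y})
        # per-era correlation between predictions and targets
        era_scores = df.groupby("era").apply(
            lambda g: np.corrcoef(g["pred"], g["target"])[0, 1]
        )
        # select the worst `proportion` of eras for the next batch of trees
        n_worst = max(1, int(len(era_scores) * proportion))
        worst = era_scores.nsmallest(n_worst).index
        mask = df["era"].isin(worst).to_numpy()
        model.n_estimators += trees_per_step  # warm_start: grow the ensemble
        model.fit(X[mask], y[mask])
    return model

# Toy data: 4 eras of 100 rows, 5 features, one weakly predictive.
rng = np.random.default_rng(42)
X = rng.random((400, 5))
y = X[:, 0] * 0.5 + rng.random(400) * 0.5
eras = np.repeat([f"era{i}" for i in range(4)], 100)
model = era_boost_train(X, y, eras)
print(model.n_estimators_)  # 30 trees after 3 iterations of 10
```

Setting `proportion` lower (0.2 instead of 0.5) refits each batch on fewer eras, which is the knob Mike P suggests turning down if the worst-era set oscillates between iterations.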
🎉

If you're passionate about finance, machine learning, or data science and you're not competing in the most challenging data science tournament in the world, what are you waiting for? Don't miss the next Office Hours with Arbitrage: follow Numerai on Twitter or join the discussion on Rocket.Chat for the next time and date.

Thank you to Richard, Mike P, Patrick, JRB, and Michael Oliver for fielding questions during this Office Hours, and to Arbitrage for hosting.

Office Hours with Arbitrage #9 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 05. 21

Office Hours with Arbitrage #8

From April 23, 2020

After a busy week since Office Hours #7, team member and pizza lover Patrick Schork joined Arbitrage to talk data before diving into the questions from Slido.

The one where Arbitrage interviews Patrick Schork

Arbitrage: Why don't you start us out … I don't think you've even introduced yourself in the forum yet.

Patrick: Well, I actually built out the forum. I haven't done a formal introduction, but I've been planning to do a response to Anson's Humans of Numerai, and I'm not going to give away what I'm planning to post there. But anyway, I joined Numerai earlier in the year (I worked with Anson at Uber, actually). A lot of the stuff I work on is backend infrastructure: I built the forum, the whole single sign-on aspect, and what I'm working on right now is rolling out multiple account support. That stuff has been happening under the covers for a while, and it's kind of imminent: it's going to be rolling out very soon. Some of our APIs, like Omni Analytics and UUA's, already have API updates that are going to support the multiple account stuff. So things are happening, and if anyone has questions I'm happy to answer stuff like that.

Arbitrage: Very cool! So you get to answer some questions, because I keep this list, this handy list that I've been using for a couple of weeks now. So you're the next victim.

He only checked it once to avoid overfitting

Arbitrage: I'm going to guess that you heard about Numerai from Anson.

Patrick: Yes.

Arbitrage: So that was my question, "how did you first hear about Numerai?" Since we know the 'who,' maybe tell us the 'how'? Did he call you? Did you have coffee? Did you have a beer? What was the circumstance behind it?

Patrick: Anson and I were on the same team at Uber, in mapping, and he was really getting into crypto at that point. He was trying to buy some Ripple or something. I got into crypto a few years earlier and was part of the initial Ripple giveaway, so I had some anyway and ended up giving him some Ripple.
He gave me some … I think he had an Anson token that he was working on? Some little token that he had. We connected over the crypto stuff, and when he decided to join Numerai, that's when I first heard about it. I stuck around at Uber for probably another year and a half after that, but we stayed in touch. Then he and I synced up and he gave me the pitch on where they were; Erasure had just launched, so that sparked my interest, and I joined earlier this year.

Arbitrage: Great! That's awesome. So you live in San Francisco?

Patrick: No, I live in the East Bay, in Berkeley, actually.

Arbitrage: Ah, but you're still on the West Coast.

Patrick: Yeah.

Arbitrage: Were you working in the office before you all got sent home?

Patrick: Yup, I was.

Arbitrage: So we're all under the same shared suffering. I wonder about the people who are used to working from home. Has this changed their habits much? I did a lot of working from home prior to this, so I think I've adapted pretty well, but there are people who are not taking this well at all.

Patrick: Yeah, it's funny: I have a wife and two kids, and we just got a puppy right before this. He's having a good time with all of the attention. I don't know what he's going to do when we go back to work.

Arbitrage: A puppy? Well, I'm not sure I'd want to deal with a puppy 24/7 — I like being able to leave for a little while so somebody else can deal with it. So you're an engineer on the team, that's what you do for a living. What do you do when you're not working?

Patrick: I was really into martial arts, primarily Aikido and some Brazilian Jiu Jitsu, but the whole COVID thing is really hampering that. My dojo does Zooms, but it's really not the same. Other than that, I like to build stuff. A lot of woodworking, concrete, repairs around the house, things like that.

Arbitrage: Gotcha. So what programming languages do you use and why?

Patrick: Right now, I do a lot of Elixir. JavaScript is another popular one, and Python.
Those are probably the three main languages I spend most of my time in.

Arbitrage: Makes sense. Have you participated in the tournament yet at this point?

Patrick: Not really. I have my own little Integration Test that I use mostly for making sure all of the changes I'm making aren't breaking things.

"Thank you!" — Arbitrage to Patrick

Patrick: I don't have any serious models that I'm working on or anything like that.

Arbitrage: Alright, so who's your favorite team member? I'm going to guess Anson.

Patrick: My favorite team member at Numerai? I would say probably NJ.

Gauntlet = thrown

Arbitrage: There we go. The race is on.

Patrick: She's a boss.

Arbitrage: Anson has a vote, NJ has a vote … Finally we have some people picking, so that's good. Normally I would ask what your number one feature request or improvement for the tournament would be, but since you're working on one, maybe you can just give us a guesstimate on a release date for SAMM [single account multiple models]?

Patrick: The plan is to have a really light beta rollout with a couple of select folks, and then open it up to the rest of the tournament user base. This is imminent; it's probably going to start next week. There are a couple of loose ends that I'm tying up. Expect a forum post from me soon talking about what happens to accounts: how they get absorbed, how you'll have to drain any USD in your wallet before you absorb an account, how NMR gets transferred when you absorb accounts, and how the API keys change for models. So expect a forum post from me very soon, and I expect the beta period to be pretty short.

Arbitrage: That's great! Thanks for working on that — I'm pretty excited for it. Keno is saying show the puppy.

Patrick: Hold on.

Arbitrage: That's right Keno, I'm watching the chat. And there were a ton of questions in Slido, too, so I'm pretty pumped for that. That's my favorite segment of Office Hours: me trying to answer a question and then Richard telling me how it is.
Or if I have no idea, I'll just pawn it off to Bor or Michael Oliver. The more people I interview, the more people I can pawn it off to, so that's working out pretty well for me. But if you guys stop showing up, I'm in a heap of trouble. Oh well, I'll figure it out. Here we go, I think it's puppy time.

"NJ is melting." — Arbitrage

Patrick: His name is Espresso.

Arbitrage: I see that. NJ.exe has stopped.

NJ: Patrick, he's going in the spare office when we're back, just saying.

Arbitrage: That's awesome — thanks for sharing, Patrick. So there are so many questions, I think I need to just hop in on this.

Questions from Slido

From Keno: How would you neutralize a model, for example XGBoost, against the meta model?

Arbitrage explained that the challenge here is not knowing what the meta model actually is. He joked that because his model is so close to the meta model, he could sell his weekly predictions to people to help them neutralize to the meta model. "I'm just kidding, but I really don't have an answer for that, because unless they give us some way to accurately neutralize against a set of predictions, we're going to have to use proxies for the meta model."

Arbitrage said he could imagine a scenario where a data scientist wants to create the closest approximation to the meta model but without submitting it. Submitting the model wouldn't work because, as it's so closely correlated to the meta model (by design), the meta model contribution (MMC) score would be awful.

"The payoff for me," he said, "for MMC or the regular tournament is about the same.
I'm a little bummed about that, but I think overall it's a tremendous boost to how this all works." He went on to say that neutralizing against a model you've created which approximates the meta model could be useful, but he's unsure how long something like that would take.

Arbitrage asked the audience if anyone had performed any analysis on the types of models that perform well, directing the question at Michael Oliver, who has done some experimenting in the past.

Arbitrage: Have you seen any groups of users that correlate with some of the stuff you've tested?

Michael Oliver: Not a ton, I haven't looked too much into how other users are doing correlated with what I'm doing. There are definitely a lot of models that are pretty close to each other and have high correlation with the meta model also. Probably variants of XGBoost and whatnot. The interesting thing that I posted in chat, and that seems to be true, is that for linear models, their MMC and score tend to be pretty highly correlated. The couple of linear models I had tracking different regimes, their MMC and scores were very highly correlated, as were Madmin and Madmax. For the team's linear model, too, that's also the case. That's one of the more interesting things I've found. You can sort of tell who's got a linear model by looking at that.

Arbitrage mentioned how in the previous Office Hours he said that there are basically three types of models — linear, tree-based, and neural nets — and that the model, in conjunction with how a data scientist subsets the features or eras, has a significant impact on performance. He imagines that the meta model is some combination of the three model types, although the proportions are unknown. To clone the meta model, a data scientist would have to know the proportions of the different model types that are contributing.
With that information, a close approximation is likely possible by creating a weighting system in an averaged model that simulates the proportions of the different model types. "Without actually having the meta model to neutralize against," Arbitrage said, "there's no pure way. I believe Mike P has suggested we use Integration Test, but I think we can do better."

Madmin drama: thoughts on the top model being engineered to cheat and not predict well, and the pragmatism of banning the top contributor to hedge fund performance?

No, MadMIN drama.

Arbitrage prefaced his answer by saying that he has quit the Numerai tournament for months at a time in the past because he felt there was too much cheating going on. "When I play a sport, I try to follow the rules as they're written, and I don't try to invent ways to get around the rules."

He looks at the situation with Madmin through the lens of whether that person is operating in the spirit of the competition or purposely trying to find a way to extract value. He also pointed out that the bot hunting channel in the Numerai community has been around for a long time, with several users who actively search out people who are exploiting the tournament.

Pictured: Bot hunting channel

Arbitrage doesn't like it when users create models that don't contribute to the hedge fund because, ultimately, it hurts everyone, data scientists and Numerai alike.

"If Numerai's model does well, they can attract AUM [assets under management]. If AUM increases, payouts will increase. If payouts increase, we're more profitable. If we're more profitable, then we're happy.
I want that positive feedback loop to have zero interference."

A point of contention in this debate around what counts as an acceptable submission is the idea that 'code is law,' which suggests that if something is possible within the constraints of a given piece of software, it is acceptable, and any unacceptable or 'illegal' behavior needs to be prevented by how the software is programmed.

Author's note: read about The DAO hack for more debate around how 'code is law' impacts the blockchain and cryptocurrency industries.

"While there are still humans involved," Arbitrage said, "I have no problem with intervention from the team to set things straight. I think it's called for and justified." He added that the fact that Madmin was the top contributor on both the traditional leaderboard and the MMC leaderboard is very interesting. Arbitrage's conclusion, after reading Mike P's forum post, was that Madmin is performant, so actually contributing to the hedge fund, but the person behind the model designed other accounts to neutralize the risk.

If a model is actually good, backtests will prove it. "I've always wondered if the team would attribute backtests to an individual user." If a test showed conclusively that a particular model appeared performant but was actually terrible, it would be possible to send that user a message prodding them to take action.

"Hey, your model sucks. We did all of this backtesting and the actual results are horrendous. You should redo it. Or, wait 20 weeks to find out." — Arbitrage's proposed backtesting message

He argued that in some scenarios, it would be okay for the Numerai team to let data scientists know the results of their backtests. That way, if a model performs uncharacteristically well, the data scientist will know it's an outlier and not likely to last.

Tl;dr: Intervention = good, cheating = bad

Bor asks: What kind of risk management would fit well with staking NMR?
Risk of ruin, and frequently taking some NMR out of your stack, but what else?

In a direct message, Arbitrage told Bor that there are a lot of options, so data scientists need to design their own system and stick to it. This is something he discusses with his students: "They'll come up to me and say, 'I just got my student loan money, what stock should I invest in?' and I want to lose my mind every time I hear that." He tells them that if they want to trade, they should deposit an amount into their brokerage account that they're willing to lose.

"Your first year of trading is your tuition. You're probably going to lose all of it, but you're going to learn a lot." — Arbitrage

Risk management is a very personal thing, and Arbitrage stressed that if you design a system, you need to stick to it. One simple but fictitious example he shared, using an investment portfolio, is to sell a stock if it gains 20%, no matter what, and if it loses more than 7%, cut your losses and get out. He added the caveat that this is a very simple system for risk management and isn't very robust; it probably wouldn't be effective for someone trading a highly volatile asset class.

But in terms of NMR: if you stake on models, you have the total, aggregate risk across all of your accounts, and your exposure is the amount of NMR staked on each model. "If you double that, maybe you should take some out. I don't know, there's no pure answer." In the MMC environment, data scientists are searching for models that are not highly correlated, and "that will certainly factor in because now we can get into mean variance optimization. I don't know if that will translate directly."

Arbitrage's personal style: "I just pick a fixed amount of NMR to stake, and if it goes over that amount, then I'll withdraw the excess as profit.
If it dips below, I'll ride it; maybe I'll fund more if it reaches a 25% drawdown." He added that it's a personal perspective, but that considering the fiat value is important as well, because your NMR exposure has a fiat equivalent. "We had a 6x run in fiat equivalency: your risk increased 6x on the fiat side… The best thing you can do is design a system and follow that system exactly. If you deviate from your own rules, you're gambling. I don't believe that gambling is a way to manage risk."

What criteria should one consider when choosing their stake on MMC or correlation? More generally, does it make sense to still allow staking on correlation?

Arbitrage believes it's still too soon to compare staking on MMC to correlation. "I had some crazy high scores on correlation," he said, "like 12% for three weeks in a row, and I don't think I've ever seen a score that high on the MMC side." He's in wait-and-see mode: "I'm going to wait and take a better look at it," he said, adding, "I would probably look at my MMC after a month, and if I had an indication that while all models are doing well, I'm also getting a high MMC, that would probably be your clue to look at burns: if everybody's burning, how did your model do relative?"

When you realize you're only in the middle of the burn group

In the past, Arbitrage said, data scientists would look at where they were positioned in the burn group (whether they saw some of the worst burns or were closer to the middle) and use that as a way to gauge variance relative to other models. His advice: have patience, because results take time, and pick a stake and stick with it for a while.

What are the differences between the various Integration Test accounts?

As Numerai engineer Jason explained to Arbitrage, the Integration Test accounts are the same example model but submitted on different days of the week to test the system.

Arbitrage: I don't know if that's still true. I don't think Jason's on… Jason?
Jason?

Speak of the devil

Jason: Yeah, I'm here, and yeah, that is true; that's exactly what that is.

Arbitrage: Oh! Jason's here! Awesome. I'm going to pick on you next week, so you better come back.

Jason: Alright.

Author's note: he did.

Can you create virtual eras to simulate a financial crash or economic boom or whatever? I'd like to know how my model fights against all odds.

"If only we could create such a thing." — Arbitrage, full of sorrow

Arbitrage explained that he didn't think it would be possible to create a synthetic data set, because outside of Numerai, no one knows what the target or feature columns represent. Simulations of different economic conditions would have to be created by Numerai. "But honestly, I don't really care. I want my model to be good on average. It seems like we all did pretty well during the recent selloff, and like we all recovered well, so it seems that on average we're doing pretty good." He added that if you'd like Numerai to create something like this, suggesting it in the Feedback channel is a good way to raise it with the team.

Would you recommend using the data from previous rounds for training and validation?

Arbitrage pointed out that there is no data from previous rounds. The training data doesn't change, and the validation data doesn't change (although more data was recently added). And while many people request having old tournament data converted into training data, "it would just cause us to overfit." He added that he doesn't want more data, finding what is currently offered sufficient as long as it remains indicative of the market. He also recommends against using older tournament data as training data, because anything you add will change your model, and stationarity is important.

What's the mean correlation and standard deviation of the meta model on live eras? Does it have a fat left tail?

Arbitrage: Oh Richard… Where's Richard?

Richard: Does what have a fat tail?

Arbitrage: The meta model.
Does it have a fat tail?

Richard: I don't know. I mean, it has a tail; I don't think it's fat. If you look on an era basis, you can see there are some eras where things have gone wrong, but I wouldn't say it's catastrophic; it's within the distribution, normally.

Arbitrage: I would guess that it would follow the distribution of returns for equities. It would make sense that it would. If you're trading on equities, I'd imagine that your return series is going to be similar. But I'll let your non-answer stand.

Richard: [The data scientists] are not buying particular equities ever; you're submitting predictions on the entire market, so it's very much normal because of all of the different predictions.

Do we know if the new MMC 2 will basically work the same as the current reputation bonus if MMC 2 bonuses are based on stake 100 days ago?

For MMC 2, there is no bonus; payouts are based on a model's performance in a round.

Richard: Yeah, anything you make that looks back in time is going to have the same kind of problems we've seen with Madmin, so everything should be on a looking-forward basis, and on your current stake.

Arbitrage: I think Zen's pretty happy with those changes, based on those payout curves. I would be too, by the way. He's smiling!

Zen: It looks really good. I'm ready to check the box.

Arbitrage: I'm a little jealous, because I don't have very high MMC with that stability I like.

Pictured: Arbitrage

Arbitrage: I think without the bonus is fine, especially with the 2x multiplier on performance. I think that's a big incentive. I'll be interested to see, without the rolling bonus approach, how the users will respond. Will we have MMC chasers?

"We see this with mining in Bitcoin, with Bitcoin Cash and all the other various iterations. At first, the miners switch to the more profitable blockchain. So I wonder if users will switch from XGBoost to neural nets week-to-week, to chase where the uniqueness lies.
It's my hunch that there are three different models, and I wonder if we'll all jump back and forth among the three. Or we'll stumble into different subsets of eras like Bor is doing, or we'll come up with some blend therein. I think it will be very interesting."

Is optimizing sharpe the best way to reach a high MMC without resorting to hackish methods like Madmin?

"I don't know, I have no idea, because I don't optimize sharpe." — Arbitrage

Richard: What do you optimize?

Arbitrage: I optimize correlation, and that's about all I'd like to say. I keep it simple.

Richard: You optimize your performance on the target.

Arbitrage: Yes. The objective function is achieved.

Arbitrage couldn't answer the question himself because he doesn't optimize for sharpe, so he opened the floor to anyone with experience doing so. Michael Oliver previously experimented with optimizing for expected payout and then optimizing for sharpe, and found the results to be very similar. Bor also tried optimizing the payout function in an older iteration of the tournament, with results similar to Michael's. Now he's optimizing sharpe minus feature exposure.

Arbitrage argued that because data scientists ultimately want performance through multiple eras and high MMC, optimizing for just one thing is probably not the best option. He doesn't believe his models are particularly good, just very stable. "That gives me the opportunity to ride the top 100 all the time, instead of having sudden drops and rushes to the top."

It would be a nice feature to stake on other users' models. Do you plan such a feature?

This will never happen, because it's too close to gambling to be within regulation.

Do feature interactions make sense for feature engineering?

Arbitrage pointed out that tree models take interactions into account and are also performant in the tournament, so feature interactions matter. "Would I want to create my own? I've tried that before, very early in the tournament, and it never worked out.
But this was three years ago, I haven’t done it since.”He thinks interactions make sense, but is unsure that, with the way the tournament data is constructed currently, it’s conducive to create your own features (because tree models and neural nets do it on their own).Should you Z score the features by eras before interaction? Could you use a neural net to find which new features to use?Arbitrage passed the mic to Bor for his insight after building a complex genetic algorithm for the tournament.Bor runs his model once to generate a series of solutions and excludes the features that have the highest correlation. He repeats this several times then takes the average of all of those models (similar to what tree models and neural nets do themselves).Ultimately, Arbitrage and Bor agree that some feature engineering makes sense, with Bor adding that his manual process is similar to what neural networks would be doing.It would be nice to submit a model and get an immediate MMC backtest result.In the past, fast access to feedback data hasn’t been beneficial for the tournament because competitors could easily make small changes over several submissions to try to reverse engineer what the most impactful features or eras might be.Richard mentioned that one thing they’ve considered doing is adding new metrics after uploading predictions beyond validation sharpe, such as correlation after Numerai neutralizes the model with the example predictions. “If that has correlation that’s positive with the target,” Richard said, “that’s a pretty strong sign that you have some MMC.”Richard added that they wouldn’t be giving data scientists backtests over the test set, but some information on the validation sets could help them decide whether or not to target MMC with a model.Do Richard and Slyfox have their own models in the tournament? If so, what are their names?Richard: I don’t, I never really have put one in. It’s maybe because I’m lazy, and I actually do want to get some in. 
Maybe convince Anson to do it, make sure they’re all hooked up to Compute, or just run automatically. But there is actually a model that’s one of ours (which I was going to mention in the era boosting post I wrote). This model, Sugaku, is a Numerai employee’s model that uses some of the era boosting ideas. It’s not only [era boosting] so I didn’t want to bring it up as an example of that, but it does use some of the ideas and it’s had a very consistent score.Arbitrage: I’ll tell you that you might like your Sugaku, by my student’s ranked higher at 68 to your 73.Joakim: Sugaku means ‘smart’ in Japanese, by the way.Arbitrage: That’s cool — did not know that.Does it make sense to optimize some of your models for correlation and others for MMC or is correlation something we should forget already?Arbitrage summarized his position from earlier: at the moment, it seems like a better strategy to diversify and split your models between both options. Ultimately you want a model that’s performant over time but also has high MMC. “The answer to that,” he said, “is to build a performant model across all eras and is relatively stable, and then you can have a little bit more faith in the model that will do well.”I understand what the p, 1-p attack is, but it’s not clear to me why the analytics of Madmin’s model have shown the model itself to not be particularly interesting, just a linear combination of a few features. Isn’t that the highest performing model?Put another way: how is the Madmin model ranked so highly on the leaderboard if it’s not doing anything noteworthy? Arbitrage suggested that if the model is just a linear combination of a few features, it’s possible the creator found or stumbled onto the strongest features that are working right now under the current regime.Richard: It’s not a good model, it’s not like you should try to use only a few features. 
It’ll give you a high variance model, and if you have two such high variance models, then one of them will do well and one will do badly. It’s not really a good thing to do long term.If you’re passionate about finance, machine learning, or data science and you’re not competing in the most challenging data science tournament in the world, what are you waiting for?Don’t miss the next Office Hours with Arbitrage : follow Numerai on Twitter or join the discussion on Rocket.Chat for the next time and date. And remember to stick around until the end for the exclusive conversation that doesn’t make it to publication.Thank you to Richard, Jason, and Michael Oliver for fielding questions during this Office Hours, to Arbitrage for hosting, and to Patrick for stepping up to be interviewed at the last minute.Office Hours with Arbitrage #8 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 05. 14

Placeholder thesis — Design...

Placeholder thesis — Design update — Aliens Exist 👽

There is now over $2 million staked on Erasure. Follow @ErasureBay on Twitter to see new requests. Post your questions for the next Erasure Bay Office Hours.

Request fulfilled

📜 Placeholder Erasure thesis

"Erasure solves the problem of bad information online." — Joel Monegro, Placeholder

Venture capital firm Placeholder published their Erasure thesis. Joel Monegro breaks down the Erasure protocol into its three core elements (payment, recourse, and track record), and explains how these ingredients have combined to build the foundation for the protocol today. His conclusion?

"It may be that the only way to distinguish good from bad information online comes down to how much value its creator stakes and their track record. That's the vision of this protocol. And once you understand how it works, and its potential, it's easy to see how it fits everywhere."

🎓 Upcoming Office Hours

The next Erasure Bay Office Hours is May 12th at 10am PT. Join the call to ask questions, give feedback, or make feature requests! Post your questions on Slido and follow Numerai on Twitter or join RocketChat for the link to the video call, which will be shared on Tuesday morning! In the last office hours, we got a simple explanation of what Erasure Bay is, Richard Craib joined, and we learned about Jonathan's dream to see a dating app built on Erasure.

👀 Reveal added

Erasure Bay saw its first new feature with the introduction of public reveals for requests.

"Public reveals now live on @ErasureBay https://t.co/gLTQEIbP47 https://t.co/oKm6s8J8Sr" — @thegostepfunction

After a request is fulfilled, you can now choose to reveal the submission to the world, allowing anyone to download the file. To date, 10 requests have been revealed.

🏖️ Frontend facelift

Erasure Bay saw some updates to its frontend to make requesting information smoother: 'Request anything' means request anything.

🤖 Staked on Erasure bot adds DAI

Erasure's number one fanbot @ErasureStaked updated its regular broadcasts to now include the total value staked on Erasure denominated in DAI as well as NMR.

"123869 $NMR and 5702 $DAI is currently staked on the Erasure-protocol, up 12578 $NMR and down 180 $DAI compared to yesterday" — @ErasureStaked

Tracking total DAI locked up in Erasure is one way to roughly gauge activity on Erasure Bay, which currently uses DAI for staking and rewards.

🍇 Protocol stats

Chart and statistics provided by DeFi Pulse as of 8:00 am EST, May 8

Total value locked is equivalent to:
$2.5M USD
11.8k ETH
252 BTC

NMR locked: 81K (0.74% of supply locked)

📊 Community-built analytics

The data science specialists at Omni Analytics Group did an in-depth analysis of some Erasure Bay metrics and neatly packaged it into an infographic (from Omni Analytics). Erasure Bay all-star Klim teamed up with Richard Chen to build an Erasure Bay dashboard on Dune Analytics. Explore charts on Dune Analytics.

🍕 Most interesting requests

Erasure Bay has seen nearly 200 requests, with 55 coming through in April! Here are some of the most interesting ones:

🍄 Mushroom ID
WANTED 📣 'ID request of this mushroom: https://t.co/GwnR1sYBYT Found in the Netherlands.' - @Derek6000 paying $8.00 https://t.co/Oy87CeyuxO — @ErasureBay

🎥 Video of the full Epstein deposition
WANTED 📣 'Full Jeffrey #Epstein deposition video (>10 minutes) Very short clips have aired so media has it. Leads in comments. https://t.co/43qsHZzPAG' - @JonathanSidego paying $2000.00 https://t.co/l7XZW0X1wv — @ErasureBay

🛹 Skate park scandals
WANTED 📣 'a video of someone digging sand out of the Venice skate park and saying "Numerai or die".' - @richardcraib paying $420.00 https://t.co/dU1pKtQbBO — @ErasureBay

🌌 Numerai-inspired Zoom backgrounds
WANTED // Zoom pack à la PizzaSlime but Numerai/Erasure/Richie Craib inspired, ref: https://t.co/j39XSvWDxA // @tasha_jade paying $75.00 // https://t.co/VnnanyMb2g — @ErasureBay

👽 Aliens Exist?
WANTED // Proof of extraterrestrial beings // @soonaorlater paying $50.00 // https://t.co/9l2IYjBnIP — @ErasureBay

💕 True love
WANTED // Zoomdate with attractive silicon valley-minded woman // @EldinhoC paying $10.00 // https://t.co/Vkvpts044E — @ErasureBay

Connect with Numerai: Telegram / Rocket.Chat / Twitter

Placeholder thesis — Design update — Aliens Exist 👽 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 05. 09

Office Hours with Arbitrage #7

From April 16, 2020

Kicking off number seven, Arbitrage welcomed data scientist Zen to his first ever Office Hours.

The usual suspects

Arbitrage: So I'm going to just open right with you because I imagine we're going to have so much to talk about afterwards, I'd hate to run out of time.
Zen: Okay. How long is this?
Arbitrage: I run an hour, I stop right on time.

The one where Arbitrage interviews Zen and goes over his one-hour limit

Arbitrage: Zen is one of our older users. Not in age, but in account age.
Zen: Both!
Arbitrage: You say both, but I can't tell. You could have an AI running your Zoom right now — we don't know. So you have three accounts, which one do you consider to be your number one account?
Zen: Oh, well, obviously Nasdaq Jockey.
Arbitrage: How did that name come to be?
Zen: I've had that handle for a long time on Yahoo (I trade stocks). I just made it up. The second model is Evolvz. That one started out with genetic algorithms, so that's why I named it [that]. And actually, the first model I ever put up was ZBrain.
Arbitrage: Ah, well then technically ZBrain would be your OG handle for this tournament.
Zen: Oh yeah, that's right, that was back in 2016.
Arbitrage: How did you find out about Numerai?
Zen: A friend of mine read a Medium article and said, 'hey, maybe you should go look at this.' So I did and then hopped on.
Arbitrage: You just said you joined in 2016, do you know the start date of your first account?
Zen: Yeah, it was December 12.
Arbitrage: In 2016?
Zen: 2016.
Arbitrage: Okay, so a little after the first wave, but still early on. And we've established now that you live in New York, or at least the New York area.
Zen: New Jersey.

True New Jersey priorities

Arbitrage: New Jersey, yeah, like I said, New York area basically. What do you do for a living?
Zen: I'm a software engineer by trade, but I've had a pretty long career and ended up working mostly in defense. Eventually I became a manager, then a senior manager, took a few buyouts here and there. I've kind of come full circle: now I work for a company and I lead the AI department. I do a lot of hands-on work too.
Arbitrage: What programming language do you use and why?
Zen: I use Python. I'm self taught, started a few years ago. I actually used Python maybe ten years ago for various little things when it was easier to use something that already existed. But I've used just about every language on the planet. Right now everything I do for Numerai is in Python.
Arbitrage: I've generally found that to be true. Except Bor, who likes to cut his wild streak and run his own way. But I imagine he's going to switch to Python, he talked a lot about the simplicity.
Zen: Bor is [running] R?
Bor: I'm using Clojure.
Zen: Very cool. I'm a Python lover, actually. I've used just about every language, but Python is great for just getting things done quickly. Maybe not speed, but some things are still good.
Arbitrage: Python wasn't really fast until, what, 2015?
Zen: Yeah, absolutely. In the beginning it was very slow.
Arbitrage: In your opinion, do you think that was a Moore's law contribution? Or do you think we just got better at compiling this stuff?
Zen: I think they got a lot better with what they've done on the backend… I read a little bit about it, but I think they've done quite a lot of work to make the core libraries run really fast. It depends what you do and how you do things now.
Arbitrage: Oh, for sure. I saw a tweet by Guido [van Rossum] and he was saying that people who are used to old-style Python should just ignore everything data science is doing. It seems like the data science community has almost "forked" Python for our own use. One of the questions that I have to ask, because you're the legendary Nasdaq Jockey: can you tell us your top three tips for the tournament?
Zen: Ha, well, let me think about that. I think the biggest problem most people have is they over-train still, even though they think they're not. They're training too much on the initial [data set], and if they're using the Validation data they're screwing themselves.

Preaching to the choir

Zen: I don't use the Validation data, and I try very hard not to over-train. I do a lot of things to make sure I don't.
Arbitrage: Alright, so that's one tip.
Zen: Consistency across the validation eras is important. There's a couple of them that are really tough to get on, and that's what Nasdaq Jockey does. It might not be so great at some of the eras in the validation data, but it's really good on a couple of the tough ones. I'm looking forward to that new [Validation 2] data because now I'm interested in seeing how I'm going to have to change what I do to tune to the new Validation data set.
Arbitrage: I'm going to ask you more about that in a second, but I'm still waiting on tip #3.
Zen: One of the things that screwed me up in the beginning was that I didn't keep good records of when I made changes to things. It takes so long to know how your model is doing. Just keep good records and go back and make small tweaks, not trying to make gigantic changes all the time (like changing states or models). I haven't changed Nasdaq Jockey in a long time. With ZBrain I've been fooling around, but [Nasdaq Jockey] I haven't changed in a long time.
Arbitrage: Yeah, I haven't changed anything with my Arbitrage account in maybe 18 months, beyond getting it adjusted for the different features. It's done pretty well. Going back, you said of the new Validation data that you're going to change a lot of stuff. But if your model's doing well now, why would you change anything?
Zen: Well, I'm probably not going to change anything with Nasdaq Jockey, but I have seven other Nasdaq Jockeys that I started three or four weeks ago.
Arbitrage: That was another question, if you're up to ten accounts now.
Zen: Yeah, I have the three initial ones, and about a month ago I made seven more. And they're totally different. A whole different idea. I think they're looking pretty good, actually.
Arbitrage: Yeah, these have been some pretty easy eras lately, so I'm waiting with bated breath to see how this all turns out.
Zen: Yeah, exactly.
Arbitrage: One of the questions I like to ask people: who is your favorite team member?
Zen: It's gotta be Anson.

This is the second vote for the slyest of foxes; the first was Bor

Zen: *Laughing* I don't really have a favorite.
Arbitrage: But you finally picked one!
Zen: He's the only one I talk to.
Slyfox: Yesss.
Arbitrage: There ya go, Slyfox.
Zen: I was in Pittsburgh and saw a bar called 'Sly Fox.'
Slyfox: You should share the picture if you still have it.
Arbitrage: We should have the East Coast meetup at the bar.

Sly Fox Taphouse in Pittsburgh

Zen: I don't go to Pittsburgh very often.
Arbitrage: Let's hope we'll be able to go to Pittsburgh, let alone worry about going very often… What is your number one feature request or improvement you'd like to see for the tournament?
Zen: I don't have a big rig or anything, I have an Alienware that I bought five years ago and I do everything on that. I wish the files were smaller. Not the number of records, I think that's fine, it's just that there's so much waste. You can reduce that file size and make it 25% of what it is and still have all of the same features and data. I don't know if [Numerai's] looked at that, it just seems pretty wasteful. It's time consuming and a pain in the ass.
Arbitrage: That's good, and I think that's something Slyfox has talked about in the past as something they'd like to iterate on. It comes out of the box 'float64' and it could easily be reduced from there.
Zen: I mean really, there's five targets, you can use zero through four if you want. Then right there off the bat you'll get a tremendous [improvement]. You can even make it a binary file if you want — I'm old school.
Slyfox: Yeah, for sure. It's something we're looking into. File size is also something that makes everything we do slower, internally. So yeah, we're definitely looking into it. Good recommendation.
Zen: Otherwise, I think the whole layout of the tournament with the leaderboard and MMC is all good, it's just very convoluted right now. It's hard to tell what we're going to end up with. You're setting an objective function for the company — that's the way I look at it. It's like, their objective is to get the best models so that they can create a good metamodel. So they're tweaking all of our rewards so we give them what they want. I think it's working, at least it seems to be working. It's hard to tell. I didn't like the answer the other day when [Richard] said that he's okay when people want to stake on the example model. I don't know, that kind of seemed odd to me.
Arbitrage: I was kind of irked by that too, but if you take a huge step back and think about it, the way it was answered made sense.
Zen: I know it makes sense.
Arbitrage: It is, in the sense that it's a zero-effort way to climb the leaderboard. I don't like that because I want people to struggle as much as I did, and so I want the path to be as difficult and onerous as possible so they don't inadvertently surpass me, but I digress.

Learning data science

Zen: I understand. I mean, it's a competition, so you've got to have your own secret sauce so you can beat the other guys, but there's a certain amount of collaboration we're all doing (to a certain level).
Arbitrage: Agreed.
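Zen's earlier file-size point is concrete: if every feature column only ever holds one of five values, storing it as 8-byte `float64` wastes most of the bytes. A minimal sketch with pandas, using made-up data in place of the real tournament file (the column names and the 0/0.25/0.5/0.75/1 encoding are assumptions for illustration):

```python
import numpy as np
import pandas as pd

# Stand-in for the tournament data: feature columns that only ever hold
# one of five values, but arrive as float64 out of the box.
rng = np.random.default_rng(0)
df = pd.DataFrame(
    rng.choice([0.0, 0.25, 0.5, 0.75, 1.0], size=(1000, 50)),
    columns=[f"feature_{i}" for i in range(50)],
)
before = df.memory_usage(deep=True).sum()

# Re-encode the five levels as integers 0-4 in a single signed byte,
# as Zen suggests ("you can use zero through four if you want").
small = (df * 4).astype(np.int8)
after = small.memory_usage(deep=True).sum()

print(f"{before} -> {after} bytes")
```

Dividing by four recovers the original values exactly, since each level is a multiple of 0.25; the per-column footprint drops by roughly 8x.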
Arbitrage: I know this is your first Office Hours, but there's this section of the process in this Zoom series where we talk about some stuff, but don't really say anything at all. And I think that's the collaboration you might be referring to. So you said you're up to ten models, eight total variations of Nasdaq Jockey — why didn't you go for ZBrain or Evolvz and try something with those?
Zen: Actually, at this point, they're all similar. Well, the first three [Nasdaq Jockey, Evolvz, ZBrain] are similar, but the new seven are very different. Just because they're the same name doesn't mean they're the same model. I keep track of everything that's going on, but Nasdaq Jockey 1 has nothing to do with Nasdaq Jockey. Totally different. One through seven are all different.
Arbitrage: Interesting. For me, I actually do use the numbers, they mean something.
Zen: I wish I had started out like that, and just used Google accounts like that. I can't wait for single sign-in.
Arbitrage: Yeah, SAMM [Single Account Multiple Models] — we're all anxiously awaiting that. That'll definitely be good. So, you have pretty good confidence in your models: are you staking evenly across them, or do you still favor Nasdaq Jockey?
Zen: About every three months I look at the performance and I weight the staking to the best model. I have more on Nasdaq Jockey, less on Evolvz, and even less on ZBrain.
Arbitrage: Yeah, personally I look at the approach I took to arriving at that model. If I think it has the best justification from a design standpoint — I came at it with a scientific approach and came to a conclusion that makes sense — I can believe in that a little more than something I cobbled together by chance.
Zen: I just look at the stripped-down performance, not the bonuses, just how well did it really perform on the live data. That's number 1 for me on staking. I don't have the other seven staked yet, I have to transfer some NMR there.
Arbitrage: Yeah, I'm waiting to see a little bit before I stake on some of the new ones. In the end, I probably will, but I doubt I'll stake very large.

From chat: Do we get to rename accounts with the new merger?

Slyfox (in chat): Eventually, yes. The username is pretty embedded in a few places (leaderboard, profile page, internal code, etc.) so it will take a bit of time, but eventually yes.
Slyfox (in meatspace): Another question I'm thinking about is, "what can we build to help you guys track your changes better?" Keno had a lot of good suggestions here, and ideas for somehow letting you label your models in time. If you guys have any ideas how we can make that easier, that's something we can also build. At the simplest level, letting you change your name might help.
Arbitrage: Yeah, I don't know. I'm kind of a fan of stickiness. My account is Arbitrage and has been since June of 2016. I don't want to change that, I want it to stay nice and stable. I guess I'm old school in that sense. You change your profile picture, but your username to me is a fixed thing. It's tied to the blockchain too, in a way.
Slyfox: It used to be tied to the blockchain. Right now, it is not. The new set of staking contracts are only tied to your Ethereum address.
Arbitrage: Well, Zen, or Nasdaq Jockey… I'm going to call you Nasdaq Jockey because that's who I want to beat. Thank you for coming in today and answering some of my questions.
Zen: Hey, no problem.
Arbitrage: It was really helpful. There is a theme, I've noticed, with a lot of the people talking about avoiding overfitting: make sure you average across the eras, and also take good notes. That was Bor's number one suggestion: good note taking. You can see that that's consistent across users at the top of the leaderboard.
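The "consistency across the validation eras" that Zen emphasizes is straightforward to measure: score each era separately, then look at the mean and the spread of the per-era scores rather than one pooled number. A rough sketch on synthetic data (the column names mimic the tournament layout but are assumptions here):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a submission: predictions, targets, era labels.
rng = np.random.default_rng(2)
n_eras, rows = 20, 250
df = pd.DataFrame({
    "era": np.repeat([f"era{i}" for i in range(1, n_eras + 1)], rows),
    "prediction": rng.normal(size=n_eras * rows),
})
df["target"] = 0.05 * df["prediction"] + rng.normal(size=len(df))

# Correlation per era, not pooled: a model can look fine on average
# while failing badly in a handful of tough eras.
per_era = df.groupby("era")[["prediction", "target"]].apply(
    lambda g: np.corrcoef(g["prediction"], g["target"])[0, 1]
)

mean, std = per_era.mean(), per_era.std()
print(f"mean corr: {mean:.3f}, era sharpe: {mean / std:.2f}, "
      f"worst era: {per_era.min():.3f}")
```

The mean/std ratio is the era-wise "sharpe" mentioned earlier in these Office Hours, and the worst-era figure is one way to spot the kind of eras Zen tunes for.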
I’m really excited about the questions today, because this first one, I’ve thought about for a while.Questions from SlidoPretend I’m a five year old: explain exactly how MMC2 works (asking for a friend).“I’m not sure I’m going to do a good job, but I’m gonna give it hell.” — ArbitrageBanking on the fact that most people have played some team sport by age five, Arbitrage set up the following analogy: If you play a team sport, not everybody can be the pitcher (in baseball). Sometimes the team needs an outfielder, an infielder, pitcher, catcher, people who are really good at handling left-handed pitchers, etc. In the end, it takes all of the varied skill sets coming together to achieve victory for the team.Extrapolating that example to the Numerai tournament: if all of the data scientists competing were pitchers, then the meta model would be terrible. But if we had a bunch of unique skillets and played as a team, then we can win.NJ shared that Michael P used a similar explanation at Numerai HQ in the past (although Numerai engineer Jason didn’t quite agree).Michael P’s controversial example opted for a basketball team with four Shaquille O’Neals (one of the most dominant players ever but with a specific skill set) and posed the question: would that team be better off with a fifth Shaq or literally any other player with a different skill set (even if that player isn’t as talented). Slyfox and Arbitrage were quick to side with Jason and draft Shaq #5.“Data science is not a singular act, but a habit. You are what you repeatedly do.” — Shaq CraibAuthor’s note: Michael P’s basketball reputation dropped to -0.0547Slyfox tried his hand at an explanation, also choosing a basketball analogy in the form of the plus-minus score. When someone evaluates an athlete’s performance, they can look at their individual stats (like points scored, plays made, etc). 
But you can also statistically measure how well the team does when that player is on the court compared to when they’re on the bench. If you play fantasy sports, this kind of scoring is already popular. “To me, MMC is just plus-minus,” Slyfox said. “Does the team perform better with you in it or not?”

“But what if you are the team?” Arbitrage asked.

Slyfox was not ready for that.

“In the case of my model,” Arbitrage explained, “I submit predictions on Saturday afternoon. And then the meta model is built after that. So if the meta model converges on the solution that I’ve already uploaded, I don’t get an MMC bonus.”

Michael P: Yeah.

Arbitrage: Yeah.

Slyfox: Well, you’re not helping.

Arbitrage: But I came first — you guys took my solution and now you’re not paying me for it.

Slyfox: We don’t want to give people too much advantage for just being first. I think that’s one of the problems we had with originality (if you’ve been here long enough).

Arbitrage: Well wait a sec — it’s unlikely that I could predict with very high certainty the exact solution of the meta model. It’s the sum of hundreds and hundreds of other models. But the fact that I did, and that my model existed in the top 20 for two and a half months, suggests that it’s good and it validates the meta model itself. Yet I’m not getting any MMC for it because of the way that it’s designed.

Slyfox: When we’re designing this payout, we still want to reward you for being good, but we’re not going to reward you because you didn’t add anything to the team.

Arbitrage: I am the team. That’s what I’m saying: I’m the team, I came first, and you just stumbled into my solution.

Slyfox: The timing of it doesn’t really matter, but yeah.

Arbitrage: I’m just playing a little semantic game, but I’m sure I’m not the only one who encounters this problem. Just something to think about.
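Slyfox’s plus-minus framing can be made concrete. Below is a rough sketch of MMC-style scoring, not Numerai’s exact formula (the rank-normalization and covariance details here are assumptions for illustration): neutralize a model’s predictions against the meta model, then score whatever unique component remains against the targets.

```python
import numpy as np

def mmc_sketch(user_preds, meta_preds, targets):
    """Rough sketch of MMC-style scoring, NOT Numerai's exact formula:
    neutralize the user's predictions against the meta model, then
    score the leftover (unique) component against the targets."""
    def rank_norm(x):
        # Map predictions to centered, uniform ranks so scales match
        ranks = np.argsort(np.argsort(x)).astype(float)
        return ranks / (len(x) - 1) - 0.5

    u = rank_norm(np.asarray(user_preds))
    m = rank_norm(np.asarray(meta_preds))

    # Linear neutralization: remove the part of u explained by m
    residual = u - m * (u @ m) / (m @ m)

    # Whatever relationship with the targets survives is "unique" signal
    return float(np.cov(residual, targets)[0, 1])
```

Under this sketch, a submission identical to the meta model neutralizes to exactly zero (Arbitrage’s complaint: timing doesn’t matter, only being different does), while a model that adds signal beyond the meta model scores positive.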
Arbitrage: I won’t be playing MMC because I have no incentive to, but I feel like if I am providing you the signal first, and then you stumble into my solution as the optimal one, well, I think I should get something for that. Especially if other people are getting a larger piece of the proverbial pie just because they’re different. If I’m the only one complaining about it, well, clearly you’re going to ignore me, but I bring it up because it’s an interesting problem that I’m thinking about.

Michael P: Say that you’re the meta model. Now, when other people are playing MMC, in order to have positive rewards they have to be pulling the meta model in a better direction: they have to be better than the meta model. You can’t get good MMC just for being unique if you’re doing worse than the meta model. To get long-term expected benefits from it, your model has to be better than the meta model.

If you do have the meta model, if you have the best possible model, and the meta model is better than what anyone else could come up with, then no one will be making money on MMC anyway and everyone would just play the main tournament. MMC was designed to remove those inefficiencies and accelerate the progress towards the best meta model. So if you truly have the best model and the meta model is the best, no one would be playing MMC.

Arbitrage: Just a note, since this gets summarized and put on the web: I’m not claiming that I have the best model. It’s apparent that I don’t, because I’m not number one now and I’ve only been number one for a couple of days. Just wanted to make sure I clarified that a bit.

You said that you use Validation for training after having applied cross validation properly.
Are you planning to use the Validation 2 data for training also?

Arbitrage felt that his model is performing well at the moment, expressing that he mostly hopes Validation 2 doesn’t change his data pipeline, forcing him to go through his code and remove the new data.

Arbitrage cleaning his data pipeline

Arbitrage doesn’t plan to change anything with his current models — at least at first. Using his remaining account slots, he’s going to train new models on the Validation 2 data and track their performance long-term. “I’m not changing my main models at all,” he said, “they’re really good and they’ve been good for a long time. And I am the meta model.”

Regarding payouts: when do you (or anyone) think they will stabilize? How far are we from a fair payout system?

“When do I think it will stabilize? Never.” — Arbitrage

Because the tournament deals with stock market data, Arbitrage doesn’t believe that it will ever truly “stabilize,” adding that “the second we think we arrive at a fair solution, everybody’s all in, some kind of regime change will occur and blow up our models and we’re going to have some kind of risk we didn’t account for and it will have to change.”

The more relevant question, in Arbitrage’s opinion, is around reaching a fair payout system. “I think we’re still a ways off.” He explained that even though the lift in the NMR market was awesome, if you started staking right before the increase, you also saw a 1:1 increase in risk. Because “fair” is relative to the person observing the system, Arbitrage said it’s possible to design a payout system that’s fair to a subset of users, but not for everybody. “I don’t know any possible way to satisfy everybody.”

Keno explained that his question is mostly focused on situations where models are seeing negative reputation and negative MMC but still generating high payouts.
To Keno, this suggests the incentives may not be optimally aligned to help Numerai, because it looks like models are getting paid despite poor performance.

Arbitrage pointed out that, during his tenure with the Numerai contest, the current payout system is the best that he’s seen so far. He noted that occasionally there are models that have negative performance but still seem to be paid, speculating that it’s a quirky function of a model being highly performant the majority of the time with short periods of negative performance. “Just because it was wrong one time doesn’t mean it’s bad for the fund.”

Keno referenced the leaderboard, explaining how a model in the 79th spot had a payout of over 400 NMR, while his two models in the top ten received around 50 NMR each. He said, “I’m thinking, ‘What am I doing wrong? Are my models that much worse?’ If they are, then the leaderboard is wrong and it doesn’t reflect reality. That’s my main concern.” Without payouts being directly tied to performance, data scientists lose the incentive to increase their stakes.

“I think it has a lot to do with scale,” Arbitrage responded, “it’s almost a wealth effect.” He explained how someone willing to put $200,000 at stake in the tournament is willing to take on a level of risk that many of the participants can’t relate to. “I would never risk that much. That’s trading houses … I would just buy a house.” But, because risk is relative, “this is capitalism so it all works out in the end. There’s a lot of compensation for those high stakers, but that’s exactly how it’s supposed to be.”

Arbitrage believes that the “answer” to that question, or at least what he thinks Richard might say, is that if you want to receive bigger payouts, you need to do better, and if you think you’re going to be in the top, increase your stake.

Bor asked Keno if he won’t just catch up to the accounts who are receiving larger payouts, noting that Keno’s stake is growing relatively faster.
Keno said that unfortunately, he’ll never catch up based solely on payouts because all of the models are growing exponentially, and the others are already higher on the curve.

Keno said, “It’s a systems problem called ‘success to the successful.’” He used the example of governments taxing the wealthy and providing relief to those in need, concluding that by not engaging in any kind of redistribution, Numerai is okay with models receiving high payouts only because of a large stake and not because of how much the model actually contributes to the meta model or how well it performs.

Slyfox thanked Keno for the question, saying that it’s something they at Numerai think about often but don’t really talk about. He shared his philosophy regarding fairness:

I think there are two ways to think about ‘fairness.’ One is in the human sense, in that each human is an individual and they need to be respected. This is kind of like how governments work: you need to have strong systems for identity in order to implement systems that are fair per person. In crypto, and a lot of the systems we study when we’re trying to design what Numerai wants to be in the long term, it’s fair per NMR. It’s like that because it’s really, really hard to implement sybil-resistant schemes on-chain. Instead of trying to say, ‘this is one human and there’s a maximum amount of stake this human can make,’ we have to make every payout a percentage of stakes. We’re going to pay you regardless of whether you’re a human, you’re a dog, you’re an AI, or just some script that exists on the blockchain. We have to respect that NMR as the base unit of our payouts. — Slyfox

“Obviously, we’re not there yet,” Slyfox said, “I don’t think there are autonomous AI agents competing on Numerai yet.” He sees the tournament as a group of humans working together trying to make the system work. Ultimately, he explained, the long-term solution needs to be something that’s completely decentralized and has as few rules as possible.
“When I think about stability and what we’re asymptotically going to move towards, the simplest and most fair is: you do well, you get paid; you don’t do well, you get burned in a perfectly symmetric way.”

At its core, Numerai’s payout system still functions this way. The bonuses and compounding stake are additional payout avenues that Numerai uses to reward data scientists beyond what Slyfox believes is the absolute, fair, symmetric payout. These extra payments are necessary, at least for now, because the tournament targets are not yet in a place where the data scientists can reasonably expect consistent payouts.

“The experience of having to go through multiple burn weeks, as we saw in the last few years, is really bad,” Slyfox said. He explained that were Numerai to just stick to their guns and only have a perfect, symmetric payout, many of the data scientists might not still be participating, adding that a lot of new users would likely quit if their first six weeks were nothing but getting burned.

“They’re not going to think, ‘oh, this is an elegant, symmetric system.’ No, they’re going to think, ‘this sucks.’” — Slyfox

Ultimately, what Numerai is trying to accomplish with all of the bonuses is giving the tournament data scientists more money in a way that doesn’t break the symmetry; in the extreme long term, they want to end up with just a symmetric payout. Slyfox explained that when the team thinks about MMC or new tournament targets, they’re designed to be more consistent and stationary so that the payouts are more consistent.
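The “perfectly symmetric” core Slyfox describes reduces to payout proportional to correlation, with burns as the exact mirror image. A minimal sketch of the principle only (Numerai’s real rules layer caps, bonuses, and compounding stake on top of this):

```python
def symmetric_payout(stake: float, correlation: float) -> float:
    """Symmetric core payout: you earn stake * corr on a good round
    and burn the identical amount when the correlation flips sign.
    (Principle only; caps and bonuses are deliberately left out.)"""
    return stake * correlation

# Doing well pays exactly what doing equally badly burns
gain = symmetric_payout(100.0, 0.02)
loss = symmetric_payout(100.0, -0.02)
assert gain == -loss
```

The bonuses discussed above are then extra terms added around this symmetric core, rather than changes to it.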
The result, hopefully, will be that the best users who do really well in the tournament can expect more consistent payouts, making the bonuses unnecessary.

Is there any study available on how many of the Numerai models are overfit based on live performance?

“I’ve read that Quantopian paper, by the way: 99.9% of the backtests are overfit (Wiecki et al., 2016).” — Arbitrage

As a benchmark, Arbitrage suggested that any model that’s been active for over 20 weeks and still has negative reputation is clearly overfit. Taking MMC into consideration: if a model has negative or near-zero reputation and zero or negative MMC, it’s clearly overfit. He added that anything slightly above that is probably just luck.

While no formal study exists, Arbitrage has thought about what a proper study would look like, adding that it would be a little too niche for him as he’s not sure where he would publish it.

Slyfox: Publish it in the forum, Arbitrage, for fame and glory.

Arbitrage logging into the forum

Arbitrage: I’ll let someone else get that fame and glory, I need publications in finance journals.

For a beginner, how does MMC change what I should be looking to aim for with my model? Am I now looking to be unique?

MMC means that models should be both unique and performant. Having a high-correlation model is still good for the fund, and data scientists can earn money on it — bonuses aren’t the only way to make money (they just help). Arbitrage said that specifically targeting MMC might not be an optimal strategy, instead suggesting combining performance with uniqueness as an option. “But right now, I wouldn’t advise anybody new to go down that path, at least not with your main model,” he concluded.

MMC2 neutralizes our forecast against the meta model: in a world where the meta model is perfect, we should expect MMC2 to always be negative. Is that desirable?

Arbitrage explained that if the meta model always offered perfect predictions, the data scientists would be out of business.
Numerai would have no need for their submissions because they’re not beating the meta model. “We need to be better than the meta model,” Arbitrage said, “and we need to have performance. In that sense, we need to add something to it and make everything better overall.”

He reiterated that he doesn’t think a world exists where the meta model is perfect, since it’s dealing with stocks: there will always be regime changes, currency risks, fraud, and multiple other reasons why the stock market will never be perfectly solved.

“I’m quite happy never having perfect predictions. We’ll always be able to add signal, and no matter how many changes the team makes to the tournament, we’ll always be able to do something.”

What’s the best way to introduce Validation 2 into our validation pipeline?

“I don’t know, I have to see it first. I want to see how it’s structured in the data.” — Arbitrage

Arbitrage hasn’t planned out how he’s going to handle the Validation 2 data yet, but did mention that he’ll probably add two iterations of his Arbitrage model with that data. “I don’t really plan on doing anything — I’m not going to change any of my models, and I really hope I don’t have to change any of my code in Compute. That’s my number one feature request: whatever change is made, do not change the numbering, so if it’s columns 3 through 313, leave that alone, please.”

Benevolent Slyfox is benevolent

If models are mostly a random walk, what value do they provide?

Arbitrage’s position is that data scientist performance should approximate a random walk because the models are predicting equities, meaning it’s unlikely to find a strategy that will stay above zero for very long. He mentioned one of Richard’s forum posts about autocorrelation and checking to see whether performance is stationary or not.

“Hopefully,” Arbitrage said, “we’re doing a random walk and all of us, individually, are random and none of us are correlated.
Because then the signal would be performant if you averaged across all of us.” Basically, each model hopefully has a period of high performance, and by averaging across all of the models and filtering out the noise, the resulting meta model should be performant.

The idea is that during a period of high performance, a model was right at that time. By building a model on top of all of the performant periods of other models, the meta model carries the edge. Ideally, each individual wouldn’t have a persistent edge, but Numerai would still be able to extract the edge from each model.

What are your ideas around a fair payout system?

“Homo economicus: we’re all rational agents of the economy.” — Arbitrage

“The only reason we do something is to increase our wealth, or expend wealth to increase utility.” Instead of fairness, Arbitrage opted to evaluate the payout system in terms of wealth maximization. Fair would be compensation based on effort: particularly in the early days, tournament competitors can struggle with the amount of time put into creating a model compared to the rewards. Now, though, Arbitrage expends hardly any effort because he has battle-tested models, and Numerai Compute automates the weekly contribution process, so he continues to earn based on work done in the past.

“As long as my effort is being rewarded,” he said, “and I feel that I’m being compensated for the time that I’m investing, I think it’s worth doing. When the time comes that I think I’m putting in more effort than I’m being rewarded for, then I’ll exit.”

Slyfox: To me, there are two games going on. There’s the tournament, which is just a game of data science, and then there’s the hedge fund trying to make money in the markets. The hedge fund’s performance depends on more than just the tournament: it depends on the amount of capital we have and whether or not we can execute on that.
It makes sense for those to be somewhat decoupled, and if you want to play the second game (and you’re also an accredited investor), you can talk to us about that. Not advertising, but you could ask us for more information.

With the questions from Slido completed, Arbitrage carried the conversation beyond his usual one-hour limit for the first time, chatting with Slyfox and the audience.

If you’re passionate about finance, machine learning, or data science and you’re not competing in the most challenging data science tournament in the world, what are you waiting for?

Don’t miss the next Office Hours with Arbitrage: follow Numerai on Twitter or join the discussion on Rocket.Chat for the next time and date. And remember to stick around until the end for the exclusive conversation that doesn’t make it to publication.

Thank you to Slyfox and Michael P for fielding questions during this Office Hours, to Arbitrage for hosting, and to Zen / Nasdaq Jockey for being interviewed.

Office Hours with Arbitrage #7 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.


20. 05. 07

Office Hours with Arbitrage #6

From April 9, 2020.

After a series of several successful interviews, for the sixth edition of Office Hours, Arbitrage returned to his roots: no interview, no presentation, “we’re doing it live.”

Changes to the tournament

Before diving into the questions from Slido, Arbitrage started off by talking about the recent change to the tournament’s reputation calculations that resulted in models being reranked (including their historical rankings). For Arbitrage, this put him in the top 25 for a period of about two months — a net positive for his reputation. He asked the audience, “What was your impression of the change?”

Feelings were mixed: 👍/👎

Joakim mentioned that his model is still young and dropped several ranks under the new system. “I think it has a lot to do with whether or not you already had four or more weeks in a row of submissions; if not, you weren’t averaged across like we used to be,” Arbitrage said. “That would increase your volatility because it was based on fewer than four rounds.”

The new reputation score is based on a weighted average of a model’s performance over 20 rounds. “I had a really good ramp-up from October through March, so for me I’m losing all of my good rounds every week. My rank is going to decline as those rounds fall off, so I’m watching that with bated breath, if you will.”

Arbitrage also pointed out that several tournament participants noticed that, on the same day as Office Hours, the tournament hit the 250 NMR per day payout cap.

Author’s note: since recording this episode, Numerai introduced new updates to the payout system.

Data scientist Keno expressed that it might be cause for concern among participants; someone could, for example, create multiple accounts with the same model as a way to earn more rewards without actually creating multiple models, taking up positions on the leaderboard in the process.

Arbitrage noted that this is a concern, although this behavior has yet to manifest.
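The rolling 20-round reputation described above also explains Arbitrage’s worry about good rounds “falling off”: only the most recent rounds count at all. The exact weights aren’t given in this discussion, so the exponential decay below is purely a placeholder assumption:

```python
import numpy as np

def reputation(round_scores, window=20, decay=0.9):
    """Weighted average of the last `window` round correlations.
    The `decay` weighting is a made-up stand-in for the actual
    (unspecified here) weights; newest rounds count the most."""
    recent = np.asarray(round_scores[-window:], dtype=float)
    weights = decay ** np.arange(len(recent))[::-1]  # newest weighted highest
    return float(np.average(recent, weights=weights))

# A strong early streak stops helping once it leaves the window
early_good = [0.05] * 10 + [0.00] * 20  # the good rounds have fallen off
steady     = [0.02] * 30                # consistent scores keep their value
```

Whatever the true weights, the qualitative behavior is the same: a rank built on old rounds must decline as those rounds age out of the window.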
“It’s always been a risk,” he said, “but I don’t think anyone’s using [that method] because of diversification benefits.”

He then explained McNemar’s test, a way to score two models against each other to see if they’re the same model or not. The test produces a statistic indicating whether the two models’ prediction errors differ significantly. “I’ve proposed that as a way to sniff out if somebody is running clones of something, and also to prevent people from submitting the example predictions.”

Not quite …

Keno pointed out that, historically, once the tournament data scientists “solve” the payout structure, the Numerai team is quick to update the payout calculations. He said, “Trying to earn NMR, from my observations of others, works for a little bit, but then they figure it out and say, ‘these people are gaming us,’ so they change the [payouts] … you kind of have to think, ‘what am I going to do with this competition — am I going to always try to game them? Or do I just submit a model that makes sense?’”

“Yeah, you’re right, Keno,” Arbitrage said, “and they’ve shown in the past that they’re willing to make big moves to prevent attacks.” Arbitrage then pointed out that the tournament rules also clearly state: “We reserve the right to refund your stake and void all earnings and burns if we believe that you are actively abusing or exploiting the payout rules.”

Returning to the topic of the new payout calculations, Arbitrage explained a quick analysis he performed on his own payouts. Now that the submission correlation is the payout percentage, his average correlation is 0.81%, noting that he skews a little positive and that this calculation doesn’t take into consideration bonus payouts.
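The McNemar’s test Arbitrage proposed above for sniffing out clones can be sketched in a few lines. This is an exact (binomial) version operating on binarized predictions; the helper below is illustrative, not anyone’s production clone detector:

```python
import numpy as np
from math import comb

def mcnemar_exact(preds_a, preds_b, truth):
    """Exact two-sided McNemar's test on two models' binary predictions.
    Small p-value: the models' error patterns genuinely differ.
    Large p-value: consistent with the two submissions being clones."""
    a_right = np.asarray(preds_a) == np.asarray(truth)
    b_right = np.asarray(preds_b) == np.asarray(truth)
    only_a = int(np.sum(a_right & ~b_right))  # A right where B is wrong
    only_b = int(np.sum(~a_right & b_right))  # B right where A is wrong
    n = only_a + only_b
    if n == 0:
        return 1.0  # no discordant pairs: statistically indistinguishable
    k = min(only_a, only_b)
    # Under the null, discordant pairs split 50/50: Binomial(n, 0.5)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)
```

Two identical submissions give p = 1.0, while a model that is right everywhere the other errs gives a tiny p-value; a clone screen would flag pairs whose p-value stays large.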
“I was concerned that it would be skewed negative, because in the past, that was the case: the data indicated we were more likely to burn than to earn.”

Arbitrage thanked Michael Oliver for joining, telling him, “Now that I’ve interviewed you, I’m going to refer to you as my Panel of Experienced Users, along with Bor.” He asked Michael if he’s done any data exploration on the new payout system.

That gang’s all here

Though he hasn’t done any exploring yet, Michael said he noticed that there’s going to be less day-to-day volatility because the smoothing window has more of a Gaussian shape, but it’s actually narrower because of the change from 100 days weighted equally to unequal weighting. “Without the noise of the day-to-day fluctuation, you can expect to move up and down a little faster than before.” He added that the tournament docs have already been updated to reflect the new reputation scoring.

“Also, I think the downside will be more persistent,” Arbitrage said, “if it’s sticky on top, it’ll be sticky at the bottom.” He explained that his model is hovering around 89th place on the leaderboard and isn’t gaining higher positions despite high performance. As mentioned in previous Office Hours, Arbitrage includes validation data in his training set, so he experienced higher volatility in past tournaments. He wasn’t surprised that his model is sticking in the midrange on the leaderboard, saying, “I kind of expected this.”

Arbitrage then shared that with expanded access to accounts (10 instead of 3), he’s testing cloned models, but without including validation data in the training sets, to see what the impact is on model performance, noting that it will be months before he can determine whether that worked or not.

🎵 So I keep on waiting, waiting on my scores to change.
🎵 — Arbitrage

Michael asked Arbitrage why he thought excluding the Validation data might improve his model performance, as that would effectively be just excluding one year’s worth of data. Arbitrage countered that the excluded year could be very similar to five other years, or extremely unlike a year from a more challenging period, either of which would weaken the signals from the more significant eras. “That’s my hunch,” he said.

To that point, Michael explained that testing that hypothesis would entail excluding random eras, random years, or random blocks of eras to see which approach would have the most positive impact on performance. “I haven’t done it,” Michael said, “but I’ve often wondered if excluding some of these early eras from so long ago might be a good idea.”

“Recency matters… especially when you’re trying to capture regime changes,” Arbitrage said. This is one of the reasons why he was so excited about the prospect of using the Validation data as training data: it’s more recent, so likely more relevant. When he added the validation data, Arbitrage considered dropping some of the earliest eras, but ultimately decided that was a risk. His approach is that Numerai is giving participants the data for a reason and they’re not trying to be misleading, “and that’s served me well in the past… I have found that the validation data as additional training data has increased the volatility of my performance. Yeah, I could punch higher, but I would get punished harder, too.”

Arbitrage explained that when he didn’t include Validation data in his training, his model performance was smoother, but it didn’t help him climb the leaderboard as much.
“I haven’t figured out which is more profitable,” he said. “The upside is I did really well in the beginning of the year, but we don’t have enough time with Kazutsugi to really know how it’s going to play out.”

Arbitrage then asked another OG Numerai data scientist, Themicon, for their take on the latest changes; they shared that they experienced results nearly identical to Arbitrage’s. Themicon explained that including Validation data in their training set resulted in huge fluctuations in their score: “when everyone else was doing well, I was doing really well; when everyone else was doing bad, I was doing really bad.”

“I don’t remember who I was talking to, or when,” Arbitrage said, “but somebody pointed out that if you do really well and include Validation data, it just means that the current live era is similar to Validation.” Arbitrage suspects that this will also apply to meta model contribution (MMC), such that if one person trains with Validation and others don’t, the models that don’t include Validation will become performant (though it was still too early to tell if this is the case at the time of this Office Hours).

Joakim: MMC is going to be a more difficult tournament, I reckon.

Arbitrage: I think so. If it’s a side pot, it’ll be fun, but I don’t know about targeting MMC. If I switch my model to MMC and other people do as well, and we stumble into the same solution, then our share of MMC is going to decline. So you need to choose between stability and chasing MMC. I think it’s going to be very difficult to be performant and have high MMC over time. But I’m very interested to see how it all plays out.
Especially with all of these crazy genetic algorithms that Bor is running.

Joakim: It might be more valuable for Numerai, though.

Arbitrage: When you think about it from a hedge fund perspective, it absolutely is more valuable for them to get 1,000 completely different but performant models than to get clusters of 3 different types of models that are performant, because then they’re really just creating a meta model on 3 different models.

This, Arbitrage said, he believed was unavoidable: because of the nature of the data, tournament participants will likely converge around the most performant strategies. But, as he discussed with Richard Craib, if a data scientist treats the features differently, drops certain features or subsets of features, only trains on certain eras, or uses different blending techniques, this is where MMC becomes a powerful anchor point. There’s probably enough variation in the data to capture MMC, but Arbitrage isn’t convinced that there is enough diversification of modeling techniques to achieve the same thing.

In the chat, Michael Oliver posted a link to a linear model with almost perfect correlation to MMC:

Model: Krat

Michael Oliver: It’s a linear model trained on a subset of eras in an automatically determined way. Since MMC came out, I’ve been really curious as to how MMC and the model performance line up. I don’t know what to completely make of it. It’s basically half of a mixture of linear regression models, so it tries to find the eras that best go with two linear regressions — sort of a regime within the data. The fact that MMC and performance look so similar, I don’t know what to make of it; it’s just really interesting.

Arbitrage: Your correlation with the meta model is similar to what I’ve seen with neural nets. Is it just a linear regression?
You probably won’t tell me more than that, will you …

Michael Oliver: It’s a mixture of linear regressions: it automatically parses out eras to two different linear regressions, so it’s basically about 60% of the eras.

Arbitrage pointed out that some of his students have come up with ensembles of basic linear models. The early indication is that performance will be above average, adding that 19 of his students have completed their production models and created tournament accounts (some of them even asking if they’re allowed to continue tinkering with their models even though the assignment was finished).

Questions from Slido

How much live data is needed to evaluate how overfit a model is? Is a month long enough? How confident could one be with 12 months of live vs 12 months of validation?

“One month is definitely not long enough.” — Arbitrage

Submissions to the Numerai tournament make predictions on a month-long timeframe every week, essentially making the same prediction four times. This means anyone would need at least 12 weeks of performance history to evaluate a model. On top of that, if the model starts during a burn period, which tends to be autocorrelated, it could experience four to eight weeks of continuous burn. “If you encounter that,” Arbitrage said, “you have to wait until it turns positive to see how everybody else does.”

Arbitrage said he hadn’t had the opportunity to experience entering the tournament during a burn period until recently, but he would want to know: if everybody is burning and so is his model, who recovers first? He said that if his model burns longer than the top performing users’, it’s an indication that his model isn’t performant. “But that’s, like, not scientific at all, just kind of a gut check.”

As for how confident one could be with 12 months of live data versus validation, Arbitrage said: “Not very — this is stocks, this is equities, we have no clue.
Look at what happened with Covid-19: it’s a huge regime change right in the middle of all of this, and we have to hope that our models can survive regime changes.”

Themicon: I’ve been [competing] since 2016, and in the beginning I was changing my model every week, and it did not work. I had no idea if it was me or the market or anything like that. I’ve started getting to the point where I think I have something and leave it for three or four months before I go back and look at it. I’d rather create more accounts and try other things on other accounts. I’d say four months at minimum.

Arbitrage echoed his advice to his students from back when the account limit was three: create three different accounts and, over time, kill the lowest performing one. Just delete it and try a new one. “If you have your own evolutionary process, similar to what Bor is doing but more manual, then you will always improve. It keeps you constantly innovating.” He added that now that the account limit is 10, maybe he would consider dropping the bottom three, but he’s unsure.

At the time of the Office Hours, Arbitrage was experiencing a flippening: his model Leverage was ranked higher than his model Arbitrage.

Isn’t it?

Author’s note: in the time since recording this Office Hours, Arbitrage surpassed Leverage and balance has been restored.

Arbitrage never expected this to happen because his namesake model has always performed well, but now he’s thinking it needs a closer look and might warrant some tinkering. “But if Arbitrage fell to the bottom, I’d kill it,” he said mercilessly.

“Like I tell my students with trading, there are no sacred cows.
You have to be willing to drop something that’s not working.” — Arbitrage

The conclusion: a longer time frame and a manual evolutionary process help lead to improvement over time.

Is Numerai’s influence on the market itself big enough to make a drop in correlation of our models on live data due to obvious signals from trained data already utilized?

Phrased another way, this question asks whether Numerai is trading on the predictions and thereby squashing those signals’ ability to generate profit, to which Arbitrage confidently said “no” and included another question as part of his answer: has Numerai ever revealed its yearly profit numbers or given any indication of whether the meta model is working?

Arbitrage said that he can answer both of these questions with one simple observation: all hedge funds that trade equities have to file a Form 13F once they reach a certain threshold of assets under management ($100 million). Numerai has not filed a 13F, so Arbitrage suggests that we can infer it’s not a large hedge fund and therefore is not moving the market.

Was Round 202 considered a difficult round?

“Yes.”

Arbitrage believed that this round took place when most assets were seeing high correlation: gold, bitcoin, equities in every major market, and bonds all sold off, and the only asset that saw any positive performance was treasuries (which barely moved because yields were already practically zero). “When correlations are 1,” he said, “everything blows up.”

Themicon: Any other eras that correlate with 202?

Arbitrage: I would suggest the middle of the training data — there appear to be some difficult eras there.

Arbitrage said that the difficult eras seem to be rare, and he suspects there are models in the tournament that fit to those high-volatility periods while intentionally leaving off the “easier” eras. This leads to doing well when everyone else is burning, but rarely doing well after that.
“Now that the data is balanced,” he said, “it doesn’t make sense to purposefully fit to the difficult eras.” He also noted that tournament participants should expect to see eras where performance doesn’t match their perception of the market, e.g. high burn despite no clear signals of volatility in the market.

Joakim: You mentioned eras 60–90 were difficult, do you know roughly what years they represent?

Arbitrage: I don’t — they’ve never officially told us when the time period starts. We can only guess; I’ve just noticed that the middle third of the eras seems to be rather difficult. I wouldn’t even know how to extrapolate that back to an actual time series, and I’m not sure that it really matters.

Even though Numerai data is delivered chronologically, Arbitrage pointed out that data scientists know so little about it to begin with that he’d be very cautious about trying to align the time series with any actual news or events, because that could introduce bias (which is one of Arbitrage’s least favorite things).

Joakim: I’m mostly just curious.

Arbitrage: Oh, me too! Every time I see Richard I’m asking him every possible question I can, and he always laughs at me and thinks I’m an idiot for even bothering to try, but so be it.

“Lol” — Richard Craib

Michael Oliver indicated in the chat that he has a counterpart model which performed well during era 202, prompting Arbitrage to wonder: if Michael averaged the performance of his primary model with the one that performed well during the high volatility period, what would the resulting score look like?

Michael has considered that approach, but hasn’t gotten around to trying it yet. He explained that his counterpart model is trained on the disjoint set of eras from his other model, so they’re not quite mirror images of each other, but an attempt at capturing two different regimes.
The counterpart model does perform well when everyone else is doing badly, but that rarely happens, so the model overall isn’t particularly good.

A model diversification strategy like Michael’s counterpart models may have been worthwhile in the past, but Arbitrage doesn’t see the value in something like that as the tournament currently stands because ultimately, sustained positive performance is preferable to short-lived gains.

Michael then added that he doesn’t stake on these models, but finds them interesting data points for tracking performance over time.

Arbitrage asks the Panel of Experienced Users: are you going to spread your stakes out across ten models or are you going to stick with what you know?

Themicon: I think it’s too early to say at the moment. I’ve added four more accounts with ideas that I had a long time ago that I want to try out, and I’ll leave them for the next four months and see how they do. Depending on how they do I might [spread my stake around], but at the moment I’m just sticking to my original three because I know they work in different regimes.

Arbitrage: The one thing to consider is that if you are planning on staking eventually, every day that you wait, you have to wait another 100 days to earn reputation. That’s what I’m struggling with: I staked early for three accounts and staked again right in the middle of a burn sequence, so I haven’t broken even yet. Let’s extrapolate out: 20 weeks in, all of your models are in the top 300 — are you staking evenly on them or are you sticking with what you know?

Michael Oliver: I’m definitely sticking with what works for my biggest stakes and gradually increasing stakes on things with increased confidence. If some model is looking better overall, I might switch it to one of the higher stake accounts.

Arbitrage: I guess you could switch your stakes just by changing your submission files — I didn’t even think about that. That would blend out your reputation series too.
That’s interesting, I have to think about that some more. I gotta stop talking out loud and giving out my ideas.

What are your plans to improve your models’ performance? Not asking for secret sauce, but would be interested in the direction of your and others’ thoughts.

Arbitrage said his plan is to essentially keep killing his worst performing models. He also considers volatility to be one of his parameters, so if he has a performant model that keeps swinging on the leaderboard, he would consider killing it just because of how volatile it is. “To me, that’s not very good.”

Ultimately, Arbitrage pointed out that iterating on tournament models takes a significant amount of time, so his strategy is focused more on steady growth as opposed to big short-term gains. One example he gave was having three models in the top 50 for a cumulative period of nine months. As to how he’ll achieve that, Arbitrage said he “can’t think of any way other than to kill the worst performing one in some kind of death match among my own ten models.”

Are you not entertained?

Themicon: Yeah, I think I’m going to do what you’ve been discussing. It’s such a long game. Keep the things that are working, and kill off the things that aren’t working after four months. That’s why I haven’t filled up my accounts yet. I have three that are working, four more with ideas, and I’ll see how those go before I start adding more.

SSH (in chat): Keeping 90% in one major stake and around 10% in the other two.

Richard asks in chat: How many of you plan to stake on MMC?

Arbitrage: I’m in “wait and see” mode, not going to say yes or no to that.

Michael Oliver: They’re going to change to MMC2 first, which we haven’t seen yet, so I have to see that first.

Richard: I was looking at MMC2 and it does look a little bit more stable, from what I was seeing.
I only looked at a few users, but it does seem to me that whereas you’re at the mercy of the market with the normal tournament — you’re going to burn if there’s a burn period — that doesn’t seem to be the case with MMC. Part of me has concerns that we might get to a place, maybe a year from now, where 80% of the stakes are on MMC.

Arbitrage: Why is that a concern, though?

Richard: Well, it’s not a concern, it would just be strange. The tournament changes its whole character: it’s not just about modeling the data, it’s also about kind of knowing what others are trying to do.

Arbitrage: Oh yeah, that would be a concern. Like what I was saying about how I chase MMC along with others and we stumble onto the same solution, so our share of MMC goes down because we’re correlated together.

Richard: You guys said earlier that you think it’s quite volatile; it doesn’t seem as volatile as the normal returns. If you look at the black line and the blue line on the Submissions page, usually the blue line is a little bit more compressed than the black line. So it seems to me to be a little less volatile. Often, someone who is up 80% of weeks on the normal tournament has MMC up 90% of weeks, so it seems like it might be quite compelling for a lot of people.

Arbitrage: Yeah, but I just don’t know that I can stake on both because my MMC is correlated so strongly with my tournament performance. When my model does well I get high MMC, and when I burn I get negative MMC. For me, it doesn’t offer diversification, but maybe MMC2 does. I don’t know, it’ll be interesting to see. Any way I can reduce my risk — and if that’s betting on a side pot, that’s beneficial to me. That’s what I’m waiting to see.

Slightly off topic, but: what do you (or others) think will kill the project? And why do you think there’s no real competition out there?

Off the bat, Arbitrage noted that a significant change to the global equities markets which invalidated all of the Numerai data would kill the project.
A scenario where capital controls prevented investing in foreign markets, for example, would kill the model as it’s based on foreign equity trading. Arbitrage also pointed out the legal risks involved in working within such heavily regulated industries, such as if cryptocurrency could no longer be used as a compensation mechanism. “That kind of screws things up pretty bad.”

After Keno asked about competition as a threat to the tournament, Arbitrage added one more potential killer: what if one day Richard gets a call from a massive financial services company, they buy Numerai for $10 billion, and then shut it down?

Arbitrage: Richard’s laughing, what do you have to say Richard?

Richard: Well, that’s why I have more than half the shares and control the board of the company.

Richard to people trying to buy Numerai

Richard explained that he doesn’t mind investing in his own company and his own token because he specifically doesn’t want some kind of hostile takeover to happen.

Arbitrage: If somebody called you and said, “hey, we’re going to give you $10 billion to buy your project,” that’s going to be a tough call to turn down.

Richard: Nope 🙅‍♂️

Slyfox: It’s not about the money, it’s about the vision!

Arbitrage: Everyone has a number, I refuse to believe there isn’t a number that you would take to shut this thing down. Or rather, that you would take not knowing they were going to shut it down.

Richard: Well, that’s why everything is open source, so even if someone did buy it and shut it down (which is impossible because we wouldn’t sell it), someone would just rebuild it with the code we left behind.

Arbitrage: That’s true, with Erasure being open source the way it is, I can see that.

Is live Sharpe ratio versus validation Sharpe ratio a good way to measure how overfit my model is?

Arbitrage said that in general, yes, data scientists can use the Sharpe ratio to gauge how overfit a model is, but noted that the direct measure suggested in the question doesn’t work.
A live Sharpe ratio of 1 against a validation Sharpe of 2 does not mean the model is 50% overfit, for example, because the gap could be the result of spurious correlation. “In general, comparing your in-sample to out-of-sample will always give you an indication of whether you’re overfit, but it’s not a direct measure.”

If my model performs better or worse live compared to validation, how can I determine if it’s due to over/underfitting, market regimes, the market liking/disliking my model, or feature exposure?

“You can’t.”

Arbitrage explained that because it’s live stock data, he doesn’t believe tournament participants can infer much about why models behave the way they do. The validation data is such a small subset of the larger data set: equities change by the minute and the tournament prediction time frame is a month long. This is why Arbitrage encourages his students to take a long view and to aim for something stable.

When is SAMM (single account multiple models) coming out? Can we consolidate to a single email yet?

Slyfox: Yeah, it’s coming soon! We’re working on it right now. We’re slowly making those changes to our API and putting on the final touches so sign-on and account creation make sense on the front end. It’s taking time to make it look good and usable, but it’s coming and it’s definitely a priority. So, any feature requests?

Arbitrage took the opportunity to bring up a hot topic in RocketChat: the ability to withdraw from stakes Wednesday night through Thursday morning (to send a reputation bonus directly to a user’s wallet or to pare down their stake). Arbitrage is an advocate and stamped it as his #1, highest priority feature request. Essentially, Arbitrage is asking for a window after receiving a payout during which his stake is not active, where he can choose to roll it forward or take his profit off the top.

Slyfox agreed that the idea makes sense and noted that it’s been discussed internally.
He said he would look into it, noting that in terms of timeline, if they move forward with this, it would likely be grouped with the introduction of MMC2.

Another “feature” request: it’s been about three months since the last Fireside Chat.

Arbitrage said that Office Hours with Arbitrage is not a substitute for a Fireside Chat and wanted to know when the next one would be.

“I feel like we’re scheduled to have one next week,” said NJ, who was fortunately on the call.

Author’s note: Richard and Anson host quarterly Numerai Fireside Chats where they answer questions from the Numerai tournament community covering topics like recent changes, feature requests, modeling tips, and what to look out for in the coming months. They did, in fact, have a Fireside Chat the following week. Stay tuned for a recap from that call.

If you’re passionate about finance, machine learning, or data science and you’re not competing in the most challenging data science tournament in the world, what are you waiting for?

Don’t miss the next Office Hours with Arbitrage: follow Numerai on Twitter or join the discussion on RocketChat for the next time and date.

Thank you to Keno, Michael Oliver, Slyfox, and NJ for fielding questions during this Office Hours, to Arbitrage for hosting, and to Richard Craib for being utterly unwilling to sell Numerai.

Office Hours with Arbitrage #6 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.


20. 04. 30

Office Hours with Arbitrage #5

From April 2, 2020

The one where Arbitrage interviews Bor

For the fifth edition of Office Hours, Arbitrage welcomed longtime tournament participant Bor, who was not only ready to answer questions but came with a presentation of his own. Before Bor could present, though, he had to make it through Arbitrage’s gauntlet.

Welcome to Arbitrage’s fifth rodeo.

Arbitrage: Bor, I’d like to know, how did you discover Numerai?

Bor: I don’t remember, actually. I was looking for a way to do some machine learning and find something to do while trying to learn. I knew about Kaggle before, but I have no clue how I came to Numerai. When I joined, they had this little counter going. When you submit [to the tournament], it would show you how much money you could make (until some people realized they could just scrape test data by looking at the rate that the counter was going up). It was fun… I think I made like $80 that year.

Arbitrage: Likewise. It was low payout, but it was fun because it was so new and such an exciting idea. You started participating around the same time I did. Do you remember your start date?

Bor: I can look it up — it nicely says so on the profile pages.

Arbitrage: I just want to see if you’re older than me.

Bor: June, 2016.

Arbitrage: Ah, gotcha; April 26th, 2016. I’m still waiting to interview somebody who started earlier than I did. So tell me, where do you live?

Bor: Right now, Norway.

Arbitrage: Is that where you’re from originally?

Bor: No, originally the Netherlands.

Arbitrage: So what do you do for a living?

Bor: Uh, modeling pandemics….

“😮” — Arbitrage

Arbitrage: Are you serious? That’s your job? That’s what you were doing prior to four months ago?

Bor: I’ve been doing that for the last ten years or so.

Arbitrage: That’s amazing…. So what programming language do you use and why?

Bor: Clojure, it’s a [dialect of] Lisp, and it’s about ten years old now. I was programming Ruby before, and I needed something faster for one of my models when I was doing my PhD.
I came from C, but I had to do a lot of text processing for my PhD and I didn’t want to use the C string libraries. Ruby was really nice for that, more like Python.

Arbitrage: That’s a pretty unique choice.

Bor: At that time, both Python and Ruby were new, so I picked the one that fit easier into my mind. I tried to get into Lisps a few times, and the third time it actually worked. So I switched Lisps to Clojure. I like the language. The syntax is just: open a bracket, function(arguments), close bracket. That’s the only syntax you have, basically, and I like that.

Bor was kind enough to share two snippets of his Clojure code: the first function is for fitness and the second is to help mitigate overfitting.

Arbitrage: Well, that’s great because you also get to manually deal with the data while the rest of us have numerox and numerapi, and those are written in different languages. Cool — next question. Can you tell us your top three tips for the tournament?

Bor: 1. Don’t just focus on the actual technique that is doing the model fitting; spend time documenting what you’re doing so you can keep track of all of your models. 2. Spend time actually evaluating your models. Now that we can have ten accounts, keep models running longer because you can learn something from that. 3. It’s very hard to learn something from just two or three weeks of a model running and live scores. That might just be one period, so be careful there.

Arbitrage: That’s a really good point about keeping good notes. *With sadness* That’s probably something I should harp on, too: knowing when you used such a model and what the parameters were at the time; if you changed it, when did you change it and when did that stake go on the board, so you can keep track of all that. I know I’ve flipped some of my models around: my third model used to be my second model, but I didn’t bother, I just changed the code in-line. Now the results are all mixed. Hard to keep it all straight.
That’s a really good tip.

Bor: I used the API to unmix the results again. When I plot something for myself, I can plot [a specific] model, even if it was on three accounts over time. I can just retrieve that model, and it’s all written down in code. There is quite a lot of housekeeping that’s good to write down in code, or in documentation at least.

Arbitrage: Yeah, definitely. Who’s your favorite team member?

Bor: Slyfox, right now.

Arbitrage: There’s a vote! Somebody finally put their words out.

Look at how happy he is.

Arbitrage: Did you make it to ErasureCon? I don’t know if you made it out there.

Bor: No.

Arbitrage: Yeah, that’s a long trip for you. What’s your number one feature request or improvement for the tournament?

Bor: So there have been a lot of improvements to the tournament lately. Not having to log in and out of accounts would be nice, so the multiple accounts that are coming. I’m already happy with the change that went live just yesterday. The daily scores are moving up and down so much — everybody knows that already — and Thursdays were sort of a random lock-in moment where you got paid or punished for it, and that felt weird. I’m happy that we’re now back to being basically scored on the live data and the final score.

Author’s note: read the latest on the leaderboard and how reputation is calculated in the tournament docs.

Arbitrage: And are you up to ten models now?

Bor: I’m at four. I’ve been doing, not this kind of modeling, but a different kind of modeling for a long time. What you see all these new students do is run 50 models, then look at five of them, then decide to iterate again and take up half the cluster running 50 different variants — but they only look at five of the outcomes before they realize they need to change something. I’m trying to be a bit slower in changing.
Also because I’m documenting everything I do: if you do a lot of things very fast, you increase how much documentation you have to do, so you have to pace a bit slower than just trying to fill up everything immediately.

Arbitrage: That’s excellent advice, and I think I need to incorporate that into my PhD work. I’ve got so many regressions running right now, it’s blowing my mind. So thank you, maybe I need to slow down a little bit.

One thing I’ll tell the audience: Bor and I have been speaking for a couple of weeks now (I’ve been trying to get him on). I asked him if he could present some of his findings on the era similarities stuff we always see in RocketChat. So Bor, if you’re ready, you can share your screen and take it away.

Clustering similar eras

After Arbitrage handed over the virtual mic, Bor shared some of his analysis of the Numerai data.

Bor: data scientist and maker of pretty pictures.

The graphs that Bor shared were part of an ongoing conversation on RocketChat around how best to cluster the eras in the Numerai data, in an effort to identify which (if any) of the training eras are similar to the live data.

Numerai data scientists have speculated that the live data doesn’t match well with the training data. Bor wanted to investigate whether or not this was true. He said, “if you can make a subset of the training eras and say, ‘these are the relevant eras,’ that is an advantage to have.”

Tournament participants have been trying to find optimal methods for clustering the training eras. As Bor explained, this sucked their time and energy away from tuning their models and directed it towards trying to find which training eras matched which testing eras (this is no longer the case because Numerai changed the test data set in January 2020 to include previous live eras).

One way to cluster the eras is based on summary characteristics, like the average score of the features or the average score of the target in the eras.
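That first approach can be sketched in a few lines. This is only an illustration, not Bor’s code: the era data below is synthetic, the summary is just (mean feature value, mean target), and the clustering is a toy k-means; with the real tournament data you would group the training file by its era column instead.

```python
# Sketch: cluster eras by per-era summary statistics (synthetic data).
import random

random.seed(0)

def era_summary(rows):
    """Summarize one era as (mean feature value, mean target)."""
    n = len(rows)
    mean_feat = sum(sum(r["features"]) / len(r["features"]) for r in rows) / n
    mean_tgt = sum(r["target"] for r in rows) / n
    return (mean_feat, mean_tgt)

def kmeans(points, k=2, iters=20):
    """Minimal k-means on 2-D points; returns a cluster index per point."""
    centroids = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [
            min(range(k),
                key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
            for p in points
        ]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return labels

# Fake eras: the first ten drawn around one regime, the rest around a shifted one.
eras = []
for i in range(20):
    shift = 0.0 if i < 10 else 0.3
    eras.append([{"features": [random.random() + shift for _ in range(5)],
                  "target": random.random()} for _ in range(50)])

summaries = [era_summary(rows) for rows in eras]
labels = kmeans(summaries, k=2)
print(labels)
```

As the next paragraph notes, on the real (meticulously cleaned) data these per-era summaries barely differ, which is why this simple version is of limited use.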
But because Numerai meticulously cleans their data, these are very small differences.

Another option is to use high-dimensional clustering techniques, treating every feature as one dimension and reducing back down to two dimensions. This technique worked better when the Numerai data had 20 features: “We were all doing the same thing back then,” Bor said, “using t-SNE.” Now the data set contains 310 features.

Bor explained how he clustered eras based on model scores. “Rather than take a feature or something, I would say, ‘the performance of a single model in its ability to predict for this era is what I use as one axis.’ If you have a few models, you have a few axes to cluster on.”

He discovered that regardless of which method he used, some eras were more alike. “If you fit your model to one era,” he said, “you will find that there will be a few other eras that you’re pretty good at predicting as well with this overfitted model. But you’ll be very bad in other eras.”

Bor plotted the performance of two models, goodtimes and badtimes:

The numbers within the graph signify specific eras.

The goodtimes model represents a period of high payouts, and the badtimes model represents a period of high burn. Bor noted that Michael Oliver pointed out on RocketChat that as long as the two models are opposites like goodtimes and badtimes (i.e. have the relationship p and 1 − p), their performance will always distribute along a diagonal line like the one in Bor’s plot.
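Michael Oliver’s point is easy to verify numerically. The sketch below uses synthetic data and assumes the per-era score is the Pearson correlation between predictions and targets: replacing predictions p with 1 − p negates them up to a constant shift, which exactly flips the sign of the correlation, so every era lands on the diagonal y = −x.

```python
# Sketch: two "opposite" models (p and 1 - p) score as exact negatives per era.
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

scores = []
for era in range(10):
    targets = [random.random() for _ in range(100)]
    # Noisy predictions, clipped to [0, 1], and their "opposite" model.
    p = [min(1.0, max(0.0, t + random.gauss(0, 0.5))) for t in targets]
    q = [1 - x for x in p]
    scores.append((pearson(p, targets), pearson(q, targets)))

# Every era lies on the line y = -x.
for good, bad in scores:
    assert abs(good + bad) < 1e-9
print(scores[:3])
```

This holds for any scoring rule that is invariant to affine rescaling of the predictions (Pearson and rank correlation both are), which is why the diagonal in Bor’s plot is a property of the model pair, not of the eras.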
Bor confirmed that those two models are very close to p and 1 − p, despite being trained on two different subsets of 18 eras.

The next graph Bor showed plotted the training and validation era scores for goodtimes and badtimes along with all of the live eras that had been released up to the point of recording (minus a few from the first few weeks of live data being available).

Orange: test & validation; blue: live eras.

Michael Oliver: Are your models linear or non-linear?

Bor: Well ….

Michael Oliver: Just checking.

Bor: They’re genetic algorithms, so both models are ensembles of hundreds of solutions.

Michael Oliver: Ah, okay. That’s kind of remarkable, then, for how opposite they are.

Bor: Yeah, and they’re trained independently, each on a set of 18 eras. I don’t know what’s going on. And it’s not some error in the training or validation data, because I’m guessing then the live data would have destroyed itself. I can’t really overfit to that.

Bor then explained that even though the nearly inverse nature of the two models effectively simulates plotting the era scores along one axis, the graph suggests that the live eras are still fitting quite close to the training and validation eras. He added that anyone can run this same analysis with any two long-running models.

With those graphs generated, Bor moved on to interpolation analysis using fixed rank kriging to find out the average score and standard deviation of the models in a particular area.

Pictured: interpolation using FRK

Fixed rank kriging is a method which, given a set of data points, attempts to figure out the spatial area that is affected by the single point at the center of each circle.

Running the same analysis on two different models, Bor generated these graphs:

The left graph represents average score; the right graph represents standard deviation.

One of the challenges Bor faces now is trying to determine what actions to take based on analyzing models in this way.
For example: looking at the standard deviation plot to determine how consistent the high-scoring eras are that correspond to the same region on the average score plot.

Arbitrage: That’s amazing.

Bor: It’s fun, but I’m still trying to figure out how to do it right and how not to do it wrong. The fixed rank kriging, for example, still has quite a few parameters to fill in.

In his final slide, Bor showed an interpolation of average scores (mu) for one of his older models, BOR1. “It was doing quite well … but then a bunch of live eras came in to this [red] region where apparently my model wasn’t doing as well. The model was overfit, but I couldn’t see it from the validation — I could only see it when I started plotting the live data.”

RIP BOR1: Jun 22, 2016 — April 02, 2020

With Bor’s presentation finished, Arbitrage opened the floor to questions.

Michael Oliver: I’ve tried a bunch of stuff kind of like this because ideally you want to know what cluster you’re going to be in in the live data so you can use the appropriate model. I was able to find functional clusters, but you can’t predict which one you’re going to be in (at least I couldn’t) for the live data from the features, which is what you need to do to make this useful. Being able to say, ‘there are these clusters of functional relationships’ post-hoc doesn’t help you predict the future unless you can get some sort of probability that you’ll be in one of these clusters versus a different one. I’m curious to hear your thoughts on that and how you might go about doing that.

Bor: The main thing I want to do is have my model work along the whole range of the diagonal, basically. The eras are not equally distributed across the whole space (looking at the goodtimes model, most of the eras are one-third above midway), and if you’re training your models without being agnostic to the era distribution, the models are weighted toward those periods.
So I’m thinking this way: when I’m now fitting my model’s selection of eras, I try to give them eras spread across the whole diagonal so I’m not weighted. That’s the way I’m using it right now.

Author’s note: See the entirety of Bor’s presentation on the Numerai forum.

Questions from Slido

Do you know when and how Numerai actually burns our stakes, and is there a way to see this change on a weekly basis? In other words, how is it affecting circulation?

Arbitrage and Slyfox determined that this question was a perfect fit for Stephane, who readily answered.

Stephane explained that there are several components to the NMR burn process. The actual burns are only put on-chain upon withdrawal, but are otherwise reflected in the wallet and staking balances on a day-to-day basis. The burns are only put on-chain when someone withdraws funds from their agreement, which causes Numerai to close out the agreement and settle the final balance on-chain, triggering the burn on the tokens.

There is a difference between how Numerai and Erasure Bay handle burns. Because the contracts on Erasure Bay are one-time agreements, they enact immediate transactions where someone either withdraws or burns.

Arbitrage: If I increase my stake, does that trigger the meting out as well? So if I have 100 NMR staked, I go down to 50, and then refill to 100, does that burn get enacted upon deposit?

Slyfox: It’s not just withdrawals, it’s whenever you make a change to your stake: we will apply whatever changes we have in [our] database on-chain. So if you don’t make any changes at all, we’ll just continue accruing payouts and burns in the database. I’ll add one thing: why did we move to this model? In the past, we had weekly stakes, weekly payouts, and weekly burns. This meant we had to do one on-chain transaction for every user each week. If I pay you 10 NMR, then you burn 10 the next week, then I pay you 10 again, this actually cost us a lot of money to operate.
When we burn, we burn from you, and we don’t get any of that back. When we pay, we pay out of our own pocket and we have to pay gas. The operational complexity of that was getting really high as we scale.

When we decided to move to daily payouts, we thought we could do the exact same thing except daily. Then I looked at our gas bill and it was almost more than what I was paying to all of the users [being paid to Ethereum in gas]. Stephane and I got together and came up with this new way of doing it.

Arbitrage: Thank you, Stephane.

Keno: What would I be looking for in the contract? What event logs should I be filtering for if I want to see the burn?

Stephane: We have an endpoint that allows you to track all of this — I’ll give you all more info on that.

What kind of hedge fund is Numerai? A fundamental data-driven alpha model seems like a good match, but what else? Counter spread? Quant? Long/short?

Fortunately for Arbitrage, Mr. Numerai himself, Richard Craib, was on the call and willing to take a stab at answering.

Richard plotting his answer.

Richard explained that Numerai is a global equities hedge fund driven by the machine learning models of their data science community. They’ve never traded anything besides equities, and they’re “long/short, market neutral, country neutral, sector neutral, currency neutral, factor neutral… just trying to find the edges that other people can’t find and that aren’t exposed to the risk factors that other funds are exposed to.”

Author’s note: to hear more from Richard on Numerai, the tournament, and the hedge fund industry, check out his interview in Office Hours with Arbitrage #4.

Is there a difference between using R and Python? Is one better than the other? I know they should be the same, but are they? Or is one faster?

Regarding computation, R and Python should come up with the same solution, Arbitrage explained.
The differences are in the language syntax and what happens on the back end.

“I use Python, I know a lot of people use R, and today we learned that people even use Ruby. I wouldn’t say that one is better than the other, they just have different uses. I teach Python to my students because I treat it as a Swiss Army knife. You can do just about anything with Python. I find that R is really good with time-series data.” — Arbitrage

Given the same set of inputs, R and Python should return the same outputs. The speed of either language is largely dependent on optimization and whether any given libraries being used are optimized for a task in that language.

How do we avoid overfitting when we use these methods [discussed by Bor], and are these algorithms useful at all?

“If they’re useful — well, BOR3 is doing quite okay. I can’t tell why BOR3 is okay, but it’s doing well.” — Bor

To avoid overfitting, Bor explained that he uses a maximum limit of 200 features for training his genetic algorithms rather than using all 310. He also limits each generation of the algorithms to seeing 10% of a given era, the fitness is determined by the last 20 eras it saw, and each era is selected at random from the group of eras Bor is training on. His fitness function is the Sharpe ratio over those 20 eras minus the feature correlation of the solution, in an effort to mirror what Numerai has recommended models focus on.

Will Numerai offer a route for non-participants to stake on participants’ models for a fee paid to them and to Numerai?

“The purpose of the staking is to see if you believe in your model,” Richard said, “so if you’re staking someone else, and you’ve never seen any code and you don’t know data science, your stake is just based on some leaderboard information… It doesn’t give us very much information.”

He added that anyone interested in NMR can simply hold the token without being a data scientist.
But regarding the tournament, Numerai wants the stakes to be meaningful and to express information about the models without the models being handed over.

On top of that, Richard explained that there are legal risks in trying to have the token represent the cash flow of the hedge fund. Right now, NMR is an abstraction of user performance, and there are many levels between that and the performance of the hedge fund. Along the way, Numerai performs ensembles, optimizations, trade implementations, and other transformations that aren’t part of the tournament modeling.

“I see it more like we’re buying signals: we’re buying data from our users and they’re staking on the quality of their data, rather than us investing in their hedge fund.” — Richard Craib

Can you talk a bit about what feature selection and/or engineering you recommend doing? What’s a good feature exposure range?

“I don’t do any feature engineering. At all,” Arbitrage said. “The data is clean, and they’ve done a really good job of smoothing out any kind of obvious relationships.”

When it comes to data, there’s only one Mr.

Arbitrage said that he’s a fan of Occam’s razor: the simple explanation is the right answer. “While Bor’s presentation was mind-blowing and very fascinating, I don’t do anything close to that, and I think he and I are close in rank over time.” He pointed out that their approaches are radically different: Bor does a ton to the data, whereas Arbitrage does nothing to it.

“Which one of us is making more money for our effort? Well, I’m going to claim that one because I don’t do anything to the data.”

Along with that, Arbitrage noted that feature selection is very important (and discussed at greater length in Office Hours with Arbitrage #1). “You don’t want to oversample too much,” he said, referring to Richard’s advice that the example model only looks at 10% of features at a time.
Using a small sample of the feature space per iteration is very important and helps to control overfitting. “And of course treat the eras separately,” he concluded.

Feature exposure range is something Arbitrage is still figuring out. Looking at his top-performing model, he noted that its feature exposure is lower than his main model’s, which suggests lower may be better. For his models, Arbitrage said anything above 0.08 seems too high, but he hasn’t been able to get below 0.07.

What are good strategies to reduce correlation with Example Predictions and feature exposure?

Don’t use the same model as the example model. “That’s going to give you a very different correlation. If you use XGBoost, you’re going to have a high correlation. That’s pretty much it.” He added, “If the example predictions are doing well, you want to be correlated; but to get MMC you want to have positive correlation but not too much.”

What are good approaches to ensembles in the Numerai data set?

Arbitrage suggested that any kind of ensemble will probably perform relatively well. There is a wide variety of ways to implement an ensemble, but the important thing is to still reduce feature exposure in whatever method is used.

The data is encrypted — is it really homomorphic? Are some mathematical properties lost? Our models may be tricked! Is there anything to avoid?

Richard: The homomorphic thing comes up so much, I think it’s a cool word. When we first launched … the homepage said ‘structure-preserving encryption’ in December 2015, but the Medium post said ‘using encryption techniques like homomorphic encryption,’ and people really latched onto us using precisely homomorphic encryption schemes.
Which I did try to do, and I had the data encrypted in this way, but it turned one megabyte of data into 16 gigabytes.

From “Encrypted Data For Efficient Markets”

Richard: The data went from normal, nice numbers like you have now to very high-dimensional polynomials that you had to operate on. To any normal data scientist, or even expert data scientists, it looked so weird to have these strange polynomials to operate on. So I decided not to launch with that, and instead went with a different kind of obfuscation. Encryption implies that there’s a key that, if you had it, would unlock the data; really, the data is just obfuscated.

The other important thing to note is that there are many phases between the raw data and the obfuscated data. The raw data you could understand, but in the middle, just the normalization we do to clean the data takes away a lot of the structure of the original data. It also makes the data more normal and makes eras look more alike than they would otherwise.

If we gave away our normalized data and didn’t even do the final obfuscation, I think people would still be really confused about what it was. Maybe if you were an expert who had the exact same data, you would be able to tell something.

Has anyone mentioned creating an app for large block trades of NMR? Similar to an OTC platform?

Arbitrage mentioned that this would fall outside the scope of the tournament team and would open them up to potential risk, as they can’t be involved in the market. He did add that, anecdotally, OTC trading seems to take place in London, and several organizations involved were aware of NMR.

Has Numerai ever discussed what a solution to this competition looks like? Perhaps metric thresholds, i.e. MMC 2, Sortino, or Sharpe through multiple regimes?

“We’ve been refining the problem while people are refining solutions to the problem,” Richard said.
“We change the targets, and these new targets that are out now are an attempt at a better way of thinking about the problem. If you can be good at these targets, you’re really good. If you could be good at the previous targets, I would sometimes wonder, ‘Why do I prefer this model in position 100 over the model that’s coming in first?’ That’s really bad for the tournament. Even the users can tell that they could be at the top by making a bad model they would never stake.

What’s true right now, thinking about the feature-neutral targets or whatever future targets are going to be: we want the situation to be that if a model that was in 20th is now in 25th, well, we like the model that’s now in 20th even more. And that’s because we’ve refined the problem.

Ultimately, the live data is harder than the validation data, so if you’ve found the solution to a great validation set, that wouldn’t be the whole answer. Things like feature exposure or other clues that we’ve noticed matter, like sharpe, or stationarity, which we haven’t discussed much but I think is a really critical thing (where it looks like you’re playing in a casino with a memory-less process, so your likelihood of winning next month isn’t increased if you’ve won this month). So regimes wouldn’t be a thing for your model, which is sort of what you’re talking about: you don’t see a difference between a good or bad era.

It’s kind of open ended, and that’s why no one will ever really know the answer. If we knew precisely how to frame the problem and frame the solution, we could just create a neural net ourselves.
But we need people to figure things out and stake a lot to prove that they believe in them.”

Are there any rules for what Numerai can do with the NMR token, or can they choose freely?

Arbitrage noted that he imagines what the team can do with the tokens is pretty heavily regulated, and Richard mentioned an earlier post from Numerai detailing their plans for the future of NMR, including some of the allocations for users and investors.

“We wouldn’t want it to be that 70% of the tokens are owned by investors who are never going to use Numerai or Erasure,” Richard said. “We think it’s very important to have that. I like the way our tokens look: there are a lot out in the community and a lot have been given away. When we sold to investors, it hasn’t been too much, and it’s often very much helped the token.”

Given a long enough time frame, do you think that Numerai can “solve” the stock market?

Arbitrage said no, because ultimately data scientists can’t model everything, like regime changes (such as global pandemics). “Also,” he said, “we have rule changes like tick rules, stop limits, and all kinds of strange stuff that doesn’t even fall within the purview of the tournament and that we’re not able to model ahead of time. But the very nature of what we’re doing is working to make the market more efficient. So in that sense, we’re partially solving the stock market.
And the very nature of acting on signals that exist shrinks the profitability of those signals, and for hedge funds, scale is one of the largest challenges they can face.”

If you’re passionate about finance, machine learning, or data science and you’re not competing in the most challenging data science tournament in the world, what are you waiting for?

Don’t miss the next Office Hours with Arbitrage: follow Numerai on Twitter or join the discussion on Rocket.Chat for the next time and date.

Thank you to Richard Craib, Slyfox, and Stephane for fielding questions during this Office Hours, to Arbitrage for hosting, and to Bor for the mind-blowing presentation.

Office Hours with Arbitrage #5 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.


20. 04. 27

Office Hours with Arbitrage #4

From March 26, 2020

The one where Arbitrage interviews Richard Craib

In the previous office hours, Arbitrage introduced a new segment where he interviews a member of the Numerai community or team (Numerati). For Office Hours #4, Numerai founder Richard Craib joined Arbitrage in the hot seat.

Slyfox pulled a power move by having the best virtual background early on the call.

As more people joined Office Hours, Arbitrage thanked Slyfox, NJ, and Richard (“Mr. Numerai himself,” as Arbitrage put it) for joining, and kicked off the call with some questions specifically for Richard. 👋👋👋

Could you ask Richard to come on and talk about hedge funds in general? How is the industry doing?

Richard explained that hedge funds report on their performance at the month’s end; this Office Hours was recorded on March 26th, a few days before officially reported data. However, Richard pointed out that some journalists have reported on how hedge funds are doing and what’s happened to some of them midway through the month.

“It’s quite easy to be market neutral: you just have as many dollars long as you have short. But you might not be neutral to other kinds of risks.” — Richard Craib

He referred to the often-discussed hedge fund deleveraging risk, a situation where hedge funds with 6x or 8x leverage reduce to 3x or 4x in light of current market performance. Reducing from 8x to 4x is equivalent to eliminating half of a fund’s positions.

“They’re selling the good stocks that hedge funds like,” Richard said, “and buying the bad stocks hedge funds don’t like. When everyone does that on the same day, you can have very big swings. Some of the swings you’ve seen actually are probably connected with that.”

The more a hedge fund can be neutral towards, the better, and Richard explained that Numerai is neutral to a lot of factors.
He used the example of exposure to both value and momentum, something many hedge funds have and neither of which met performance expectations in the month leading up to the Office Hours.

“The less you’re exposed to that, the less likely you are to be holding the same types of things other people are holding, and that’s definitely what we’re trying to do at Numerai: have a hedge fund product that does well when others do badly.”

Are we able to guesstimate the fund’s alpha by stake value x MMC x delta, and if so, did we outperform the market these past two weeks?

This question came from Keno in the chat, and it’s something Arbitrage has considered. From his perspective, the data scientists have no idea how much leverage Numerai may or may not be using, which would bias the results. “Given an optimization problem,” Arbitrage explained, “it could be very difficult to extrapolate from how we’re doing into [how the fund is performing].”

Richard said that signal performance is very connected, explaining that most models have parts of the Integration Test model. He said Numerai uses leverage of about 4x the fund’s gross, but even given that, he doesn’t think someone could look at an average model’s performance, or the Integration Test model’s performance, and use that to gauge how well the fund is doing.
“We’re putting those signals together,” Richard said, “and then doing this whole big optimization step where we neutralize to things you guys can’t see, like being country neutral or being neutral to a specific currency, and we have to take that out. It’s hard to say, some days it looks correlated, but it’s not, really.”

Arbitrage pointed out that several data scientists have apologized on Twitter and RocketChat; Arbitrage himself admitted not long ago that he was wrong about how market volatility would impact model performance.

OG data scientist Object Science captured the sentiment

All that being said, the community still wants to know (as evidenced by how many upvotes the question received on Slido): is there a correlation between performance of the VIX (CBOE Volatility Index) and Numerai’s burn rate?

Richard explained that there was a significant market drawdown which began in February, and the VIX went up during that time, but in actuality it was hedge fund deleveraging (occurring during a period of high VIX readings) that had the more noticeable impact on Numerai data scientist model performance.

“I think it’s a waste of time to think, ‘I’m going to put my stake up because the VIX is low and I think it will stay low.’ … Numerai is not a derivative of the VIX.” — Richard Craib

Arbitrage went on to note that, despite the large drops in the market, his recent models were performing better than he expected, crushing all of his prior models. For Richard, the fact that Arbitrage’s models were so drastically uncorrelated with market performance made perfect sense.

Richard explained that when Numerai provides backtest data to investors, that data can’t reflect any degree of correlation to something like the volatility index. He said, “If they come back and say, ‘this is 70% correlated with the VIX,’ we don’t get money from them.
It has to be 0% correlated.” By design, Numerai has taken out these known factors to make it more difficult for anyone to find a correlation.

Data scientists therefore can’t tie correlation to the fund’s performance. Arbitrage asked Richard if it’s possible for individual models to end up correlated with some factor, which would then give the appearance of correlation.

Pictured: correlation

“What is possible,” Richard said, “is for your own models to take on feature exposure.” Arbitrage discussed feature exposure in the first and second Office Hours, and Richard’s point about models taking on feature exposure reinforced what Arbitrage said specifically about how optimizing for features can lead to overfitting the data. Richard said they were looking into neutralizing to the features, suggesting that looking at correlation against the Validation data after neutralizing out all of the features is a better way to gauge how well a model will perform out of sample.

“It’s like saying, you do your own optimization and take out all of the feature exposure you can: if your exposure is .10, get that down, and if you get that down, you’re going to have a better model.” — Richard

Richard explained that models with very high feature exposures, like a model fully optimized on just one feature, could conceivably be recognized as correlating to something in the market. He said that some of the features have more of a value tilt than others, so if a model is trained exclusively on one of those features, he wouldn’t be surprised if that model experienced lower performance at a time when other value investments were down.
“Most models aren’t that simple, so I doubt any models are easy to read that way,” he said.

Author’s note: Numerai has since released an updated version of Metamodel Contribution, MMC(2), which includes feature neutral targets.

Arbitrage brought up a discussion with Michael Oliver from the previous Office Hours which was inspired by an idea from Richard: given that the new data structure has groups of features, it’s possible for a model to be trained on only one feature group, and finding performance that way could drive MMC (meta-model contribution).

Arbitrage: Do you still think that’s the case? ‘Will you vouch for me’ is basically what I’m asking here.

Richard: Um, I wouldn’t do that …

“Nooo, that’s not what I wanted to hear!” — Arbitrage

With the caveat that he personally wouldn’t train a model exclusively on one feature group, Richard said that he would drop certain features and possibly even entire groups as part of the learning process while iterating his model.

Richard: “We train on a lot of trees: maybe the first hundred trees can be on all of the features, then for your next hundred maybe you drop some of the features because you don’t want to be so exposed to them in subsequent iterations.”

Arbitrage: “I’ll still claim you supported what I said, you just added a nice little twist to it. Still going to claim it as a support instead of a refutation. So I do appreciate that.”

“Who’s the slyest now?” — Arbitrage (probably)

Richard said the fund’s benchmark is the risk-free rate because they’re hedged. But what if the risk-free rate is zero? Also, that implies leverage: how leveraged?

“If you’re trying to be market neutral, to have your benchmark be the market is really stupid.” — Richard Craib

“If you want to have a market neutral fund,” Richard explained, “you have a long-only position where no one is allowed to have any negative or predictions below .5.
If the risk-free rate is zero, we make money if we make more than zero.”

Do you think that the infinite money printers all around the world will cause our models to get more or less correlated with the data? Brrrrrrrrrrrrrrrrr

With a laugh, Richard agreed that it was an interesting question and mused that the lesson from the financial crisis seemed to be that “you can print money and get away with it because the inflation doesn’t come, so you don’t pay the price.” He added that the negative effects were more likely felt by other countries. “It’s definitely good for stocks,” Richard said, adding that a small crash and subsequent rise would be better than a situation like the stagflation of the 1970s.

Would Numerai expand to futures and forex?

Richard believed that eventually Numerai would expand into futures and forex, but was firm on equities being the asset class best suited for what Numerai is doing, calling them “the real game.” Because there are so many equities generating so many data points, equities are better suited for machine learning tasks, as opposed to the relatively small number of different currencies, for example.

Arbitrage asks: what programming language do you use, and why?

Richard: I’m not a good programmer, at all. I didn’t study computer science in college; I took one computer science class and then did a machine learning class afterwards. That class was taught in R, so I got into R from that, and my early work from 2013 uses R. More recently, though, I’ve been using Python because nobody else in the company uses R.

Arbitrage: A little birdy told me NJ is an R fanatic.

NJ: Richard made me learn R in 2015 using twotorials.com. He would go to work at this asset management company and would tell me that by the time he got home, I would have to have written some loop a million times.
I couldn’t understand why my laptop kept crashing — it was a trick and I fell for it.

“Sorry, not sorry.” — Richard (probably)

Arbitrage asks: can you tell us your top three tips for the tournament?

Richard: I think focusing on sharpe is the one thing we try to encourage. If you try to focus on sharpe, you’ll see how dangerous it can be to maximize the mean. If you try to maximize your mean correlation, you can make a model that has the best possible mean but right in the middle has four terrible months out of the year. What you really want is a model that has performance in all months. That would be a higher sharpe model, even though the mean is lower.

You can say, ‘I don’t care, I want to maximize mean. I can wait out the burns.’ But you’re missing the point: maximizing sharpe also maximizes your likelihood of generalizing to new data. The live data might be dense with eras like the four months you decided not to get good at.

Throwing Numerai’s data into a machine learning algorithm that cares about maximizing the mean score but doesn’t care about eras (where sharpe forces you to care about eras) is a big mistake people make. We have some new things coming up that will help people see that more clearly.

Arbitrage asks: who is your favorite team member?

Richard: Team member??

Arbitrage: I’m just going down the list, I asked this question to everybody. No bias here, it’s the same list for everybody.

Richard: It’s very hard — they’re very good! We have a lot of good people right now; we’ve never had a better team. Michael, who joined recently, is very good, I’m very happy with him. Anson obviously got promoted to CTO, for very good reason. I really like the team as it is now.

Arbitrage: That’s a great answer and similar to what I’ve heard in the past: nobody will pick.

Arbitrage asks: how many beers did you have at ErasureCon?

Richard: None … I did have some of the cocktails from the concession booths.
But I feel like that was after ErasureCon?

NJ: You weren’t drinking during, you just wanted Diet Cokes.

Arbitrage: There you go, she gave you cover. That’s good stuff.

Richard: That’s our VP of Communications right there!

From Stelian: would you consider decentralizing the process of using tournament predictions to build a portfolio?

After Arbitrage finished his list of questions, he opened the floor to attendees, presenting an opportunity to ask Richard questions directly. Stelian was the first to speak up, asking if Numerai would open up the portfolio management process.

This follows closely with something Richard has been planning to do. He explained that the Numerai data scientists currently submit a whole vector of probabilities, but there could be another column on the upload with a one if you want the stock in your longs and a minus one if you want it in your shorts, essentially making a portfolio. If data scientists have to choose 1,000 stocks, 500 long and 500 short, they might not want to choose the top 500 and bottom 500, instead opting for a more risk-neutral mix that avoids too much exposure to any one feature.

Erasure Quant is set up to do two things: signal generation and optimization. But if the optimization is all machine learning, “it feels like you better tell the machine learning model what is going to happen to its signal. Otherwise, it will learn things.”

Read Introducing Erasure Quant to learn more.

Richard shared that the first thing Numerai is going to do about this is introduce a new version of the target (which doesn’t have a name yet). The new target is going to be a feature-neutralized Kazutsugi target. Currently, the features all have exposure to the target, so when data scientists are training models, they like to use the features because they have correlation with the target.
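In its simplest linear form, feature-neutralizing a target means regressing the target on the features and keeping only the residual. This is a generic sketch of that idea, not Numerai's actual procedure; the function name and re-standardization step are my own choices.

```python
import numpy as np

def feature_neutralize(target: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Return the part of `target` that is linearly independent of `features`:
    regress the target on the features (plus an intercept) and keep the residual."""
    X = np.column_stack([np.ones(len(target)), features])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    residual = target - X @ beta
    # Re-standardize so the neutralized target stays on a comparable scale.
    return (residual - residual.mean()) / residual.std()
```

After this transformation, the new target has (linearly) zero correlation with every feature, so a model can only score on it through signal that is independent of the provided features.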
But, as Richard explained, making a feature-neutral target forces data scientists to see how good their signals are independent of the features Numerai provides.

“That’s one way of getting at the same problem, where we’re at least telling you that we’re going to feature neutralize you down the process for the optimizer and the meta-model, so we might as well tell you to learn to get good at feature neutral modeling.” He expressed that this will be a continuous, ongoing effort, adding that getting the data scientists “closer to the real problem has been the story of Numerai, and this will be the next phase of that. Maybe the final phase will be this portfolio management idea, but I’m not sure if we need it if the feature neutralizing works well.”

From Keno: where are we with payouts? Are you comfortable with how the payouts have transpired over the last two highly volatile weeks?

In the two-week period leading up to this Office Hours, the market saw significant volatility, with staked users outside of the top 100 sometimes burning up to 50% of their stake.

Payout band of ±0.2. Learn more about staking and payouts in the tournament docs.

Payout is a function of a model’s total stake and its average daily correlation, meaning that for a 100 NMR stake, a daily correlation of -0.1 would result in a payout of -50%, burning 50 of the NMR at stake. Keno’s question reflects that correlation is likely to be far off during periods of high market volatility, potentially leading to significant losses for participants.

Arbitrage took the opportunity to plug #BurnInsurance (you can read more about his ideas from Office Hours #1 and Office Hours #2); Richard directed the question towards Michael, who has been working on improving the payouts system.

Michael mentioned that he’s been working on small changes to the payouts system to help reduce volatility (read his latest announcement for more details).
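The payout rule described above (a ±0.2 band, where a -0.1 correlation burns 50% of a 100 NMR stake) can be written as a one-liner; the function name is mine, and this is a sketch of the rule as stated in the recap, not the official implementation.

```python
def payout(stake: float, corr: float, band: float = 0.2) -> float:
    """Payout with correlation clipped to the ±band and scaled so that
    corr == +band pays +100% of the stake and corr == -band burns all of it."""
    clipped = max(-band, min(band, corr))
    return stake * clipped / band
```

For example, `payout(100, -0.1)` reproduces the -50 NMR burn from the text, and any correlation beyond ±0.2 is capped at a full gain or full burn.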
The challenge, however, is that path dependency and automatic compounding cause data scientists to be more exposed than they would want during periods of high burn. “I’m still in the camp that if you want to burn less, you can stake less or build a less volatile model that’s less exposed to the bad periods.”

Keno countered with a good point: because data scientists are working on a four-week time lag, they are unable to predict periods of higher volatility, noting that if he knew when an inflection point was coming, he would alter his stake accordingly. Specifically, the type of protection Keno is looking for is against unusual ‘black swan’ events, like a 10–20% change in the Dow Jones Industrial Average.

Ultimately, as Michael said, Numerai can’t predict these kinds of events either, and they’re exposed just like the data scientists: “When you burn, we burn. It’s part of the game.”

Arbitrage, a long-time champion of #BurnInsurance, shared that even though he experienced significant burn, he’s up 10% from February 21st, putting him ahead throughout the volatile period. “Think of it in terms of a Darwinian experiment: only the strong survive. If you’re at the top of the leaderboard, you will survive the burn, and I think that’s an incentive to build a better model.”

Arbitrage reiterated his belief that if the data scientists can’t model something, they should be compensated for some of the unknown risk, but he noted that averaging across all of the weeks has helped significantly, resulting in him recovering from the volatility shortly after the market stabilized.

Richard added that it’s not because the Dow or the market dropped that users saw their models perform worse: “it’s just that the performance was bad.” He said it’s not that the volatility can’t be modeled — “that IS what you’re modeling — exactly that signal (whatever you’re seeing).”

The next point Richard brought up was the work the team has done around meta-model contribution.
He pointed out how some users contribute to the meta-model week after week, regardless of where the Integration Test reputation line is going. “Once you can bet on that,” he said, “I think all of this volatility will go away.”

How MMC is formulated, from the meta-model contribution proposal:

1. We check how the stake-weighted metamodel performs with your model included (but we pretend you staked the mean; this way your MMC is independent of your stake size).
2. Then we hypothetically remove your model from the metamodel and see how much that hurts or helps.
3. The difference between these two metamodels is your metamodel contribution.
4. We repeat this process 300 times, using a random subsample of 67% of stakers each time.
5. The mean of a model’s score across these 300 trials is its metamodel contribution.

Richard brought up a post by a Numerai data scientist who compared his performance to the example predictions model; they were nearly perfectly matched. Looking at that graph, Richard explained that most of its variance is explained not by what the data scientist is doing, but by the overall quality of the data. “Meta-model contribution means you can win no matter how good the data quality is, because you can always be better at modeling and contributing than other people, and you can do that reliably.” This also addresses the volatility problem: with payouts based on MMC, models can perform well even throughout burn periods.

From Stelian: is it possible to condition the models upon submission such that they won’t output anything at a certain volatility threshold?

Stelian proposed setting specific conditions when data scientists submit their models to act as virtual “insurance,” allowing the user to set some guard rails to protect against extreme volatility.
He agreed with Richard’s statement that “there’s no insurance in this business, it happens, everybody’s in the same boat and you can’t just buy insurance,” but pointed out that from a researcher’s perspective, it would be useful to have these conditions if you know that a model doesn’t perform well during periods of high volatility. Following from that idea, Stelian asked if the features are provided in an obfuscated way so data scientists can make sure there’s no correlation when a model is created.

“The big trick there,” Richard said, “is that the target is neutral to these things. The Kazutsugi target is neutral to volatility: it’s exactly zero. If anybody says, ‘oh, it’s volatility…’ it’s not.” Richard explained that because the Kazutsugi target is neutral to volatility, changes in the VIX, for example, would not be the sole cause of poor model performance. Instead, large swings in the VIX can affect other factors, which could impact a model (depending on how those features were incorporated during training).

Learn more about Kazutsugi in Numerai in 2019.

To emphasize his point about MMC, Richard pulled up a data scientist’s model which scored positive MMC nearly every week. He pointed out that despite the volatility of the market and the subsequent drop in performance of the Integration Test model, this model still managed to come out ahead in the majority of weeks.

Model Niam from data scientist Michael Oliver.

Michael Oliver was on the call and was promptly asked, “What’s your secret?”

Michael O: Trying lots of things.

Arbitrage: This is the part of the show where we all talk about a lot of stuff but specifically nothing.

Author’s note: learn more about Michael’s approach to data science and the tournament in his interview from the previous Office Hours.

Is Stata useful, or can it be used for the Numerai tournament?

Considering that Stata is primarily used as a research platform, Arbitrage didn’t think it could be used for the tournament.
Though he noted that the latest version allows for Python integration, possibly making the platform useful for data processing or cleaning, Stata’s inability to use tree-based models ultimately rules it out for the tournament.

How well do your validation scores correspond to live scores? Any tips on getting the first to represent the second (besides no peeking during training)?

Arbitrage’s second favorite topic (behind #BurnInsurance) is validation scores.

“I look for a validation score between 3.8–4.4%.” — Arbitrage

He’s found that if a model can get close to 4.4% without going over, generally speaking it will be a performant model, adding “your mileage may vary.”

Is griefing possible on millions of predictive signals, to a level that is contributing effectively to the system?

Richard stepped in to answer this question: “Yeah, it’s possible now, and it will be possible with more [signals] as well.”

How many predictive signals has Numerai received vs. how many does it use?

Richard: It’s between all of the stakers and half of the stakeless; it’s quite hard to beat the average of all of the users who are staking. The stakers do perform quite a lot better than people who aren’t staking, that’s for sure.

Does it make sense to use nonlinear dimensionality reduction methods in Numerai? If so, why, and which are the most scalable?

Arbitrage said that he doesn’t do any dimensionality reduction in his model, nor does he tell his students to do so. Because it’s a clean data set, Arbitrage is of the mindset that Numerai is giving the data scientists data that’s ready to go, so why would he want to do anything to it? “Especially when the signal is so low,” he said, “any transformation you make risks blowing up the signal.”

Michael O agreed, adding that one way to see what trees are doing is to expand dimensionality, which seems to work better than any nonlinear dimensionality reduction.
He concluded that if anything, expanding dimensionality would work better before reducing it.

If you were to describe the learning process for a complete beginner to become semi-competitive in the competition (including benchmarks), what would it look like?

As a finance PhD student, Arbitrage also teaches several courses, including Financial Machine Learning with Python, where he uses Numerai data. Given that, he spends a significant amount of time thinking about this learning process.

Arbitrage: I’m going to assume you’re one of my students. You’re motivated to learn coding because you realized you can make a lot more money, because you paid attention in my first lecture where I showed you salary differences between those who know Python and those who don’t. So you have some basic statistics and you’re motivated to learn.

You want to go from zero to hero in as short an amount of time as possible. You were smart, you enrolled at the university where I teach, and by fate you were getting your undergraduate degree while I’m getting my PhD, and you ended up in my class not because you chose me, but because you were forced into my class. By the grace of God, I decide we’re going to do a coding project.

What I do is this: I give you working code and ask you to modify it in place. Quite simply, I can teach you that this works, and here’s how it works, and if you change it you can get slightly different results.
As long as I can teach you those concepts, we can move fairly quickly. I’ve found that in four weeks, in about 12 hours of lectures and perhaps up to 20 hours per week of work at home (or as little as an hour per week, depending on how fast you grasp the concepts), you can place in the top 150 on the leaderboard at the end of that 100-day trading period.

Arbitrage then pulled up a model from one of his students:

Not bad at all.

The student, who had no prior coding experience, was ranked 119 at the time of the Office Hours.

“It’s definitely possible,” Arbitrage said, “but it takes a little hand holding. The key is to be dedicated to trying to break your model. I tell my students, ‘if you’re not getting error codes, you’re not trying hard enough.’ If you send me error codes, I can help fix that, but if everything works and a model does well, that’s just luck.” Referring to the student model he was sharing, Arbitrage said, “This isn’t luck. I helped him build his own model. He picked it, he designed it, and I made sure that his code executed correctly throughout time. And he’s going to pass one of my models soon. And I’m not happy about it.”

Arbitrage concluded with a resounding yes: beginners can absolutely learn how to compete in the Numerai tournament, with or without coding experience. All you need is about five weeks, a little guidance from someone with a high rank or who has been around for a while, and tenacity.

Slyfox opened that question up to everyone on the call, asking how other data scientists onboarded themselves into the tournament. Data scientist Bor shared his story:

Bor: I wanted to improve at machine learning because I wanted to be a scientist in the future, and I needed a project to learn something and move forward. In the first years it was for coffee money only, but it was a nice start.

What’s a good validation sharpe?

Michael P answered this question, noting that validation sharpes are high values, with the Example Prediction model’s sharpe being around 1.5.
Michael added the advice that the Example Prediction sharpe is “verification that you’re calculating things correctly, but you want to calculate your sharpe yourself on the cross validation and training set. You don’t want to rely on the validation sharpe to pick your model because the validation eras are too easy.”

Richard then brought up the Validation 2 data set Slyfox mentioned during Office Hours #2. Validation 2 is a data set the Numerai team is exploring which contains the previous year of live data, meant to be used as a more robust validation data set (or as additional training data, as Arbitrage and Bor pointed out).

Excited for new data.

If you’re passionate about finance, machine learning, or data science and you’re not competing in the most challenging data science tournament in the world, what are you waiting for?

Don’t miss the next Office Hours with Arbitrage — you never know who might join. Follow Numerai on Twitter or join the discussion on Rocket.Chat for the next time and date.

Thank you to Richard Craib for joining this Office Hours call, to Arbitrage for hosting, and to Michael Oliver, Michael P, Keno, and Bor for contributing to the conversation. NJ, sorry you had to learn R.

Office Hours with Arbitrage #4 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.
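The validation metrics discussed in this call can be approximated with a short script. This is a minimal sketch, not Numerai's official scoring code: it assumes a DataFrame with `era`, `target`, and `prediction` columns (hypothetical names mirroring the tournament data), treats the "validation score" as the mean per-era Spearman correlation (Arbitrage's 3.8–4.4% would be roughly 0.038–0.044 on this scale), and derives the validation sharpe as the mean over the standard deviation of those per-era scores, in the spirit of Michael P's advice to compute it yourself.

```python
import numpy as np
import pandas as pd

def era_scores(df: pd.DataFrame) -> pd.Series:
    """Spearman correlation between predictions and targets, computed per era."""
    return df.groupby("era").apply(
        lambda era: era["prediction"].rank(pct=True).corr(era["target"], method="spearman")
    )

def validation_summary(df: pd.DataFrame) -> dict:
    scores = era_scores(df)
    return {
        "mean_corr": scores.mean(),             # the per-era "validation score"
        "sharpe": scores.mean() / scores.std(), # mean / std of per-era scores
    }

# Toy example with random data (real tournament data has hundreds of features):
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "era": np.repeat([f"era{i}" for i in range(12)], 100),
    "target": rng.random(1200),
})
df["prediction"] = df["target"] * 0.1 + rng.random(1200)  # weak signal
print(validation_summary(df))
```

Computing these summaries era by era, rather than over the pooled rows, is exactly why Michael's tip about using eras for cross-validation matters: a single pooled correlation can hide a model that only works in a few easy eras.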

Numeraire

20. 04. 16

Erasure Bay Office Hours #2

From March 31, 2020

Stephane Gosselin and Jonathan Sidego returned for the second Erasure Bay Office Hours, this time joined by Richard Craib, NJ, and community members from around the world, including someone from Australia and a pug.

Awesome turnout for Erasure Bay Office Hours #2; more joined as the call progressed.

March was a big month for Erasure: the protocol passed a significant growth milestone and saw the launch of Erasure Bay, the latest application built on Erasure by Numerai (the first being Erasure Quant). Stephane started the call with the Erasure monthly update.

Read the full post on Medium | Subscribe to get updates in your inbox

Erasure Bay is Live

Stephane highlighted some of the most interesting requests users made since Erasure Bay launched earlier in the month:

WANTED // 📣 Full video of Jeffrey Epstein deposition (>10 minutes) I've only seen short clips - https://t.co/gwSnTXBy9x // @JonathanSidego paying $2000.00 // https://t.co/dwWjlRLMjA — @ErasureBay

WANTED // NMR Fundamental Valuation Model (CSV, XLSX) from independent analyst. Include >500 words of reasoning on model's logic. // @cburniske paying $100.00 // https://t.co/LptilyiHHv — @ErasureBay

WANTED // Overview of active, large dev communities online interested in p2p/local-first, serverless, jamstack development // @dazuck paying $100.00 // https://t.co/3A89dqq78W — @ErasureBay

WANTED // A dataset of dangerous animals in csv format -Required columns- Name, Region, Height, Weight, Text Description of Danger // @OmniAnalytics paying $50.00 // https://t.co/U0fCh2Wmsv — @ErasureBay

Stephane noted the diversity of requests, ranging from Jonathan’s request for the full video of Epstein’s deposition, to dev communities finding hard-to-Google information, to requests for art.

Art from Erasure Bay, “produced under duress” as Stephane would say.

Next, Stephane broke down the relationship between NMR and DAI on Erasure Bay: when a stake or reward is burned, the DAI is automatically swapped for NMR using Uniswap. The resulting NMR is burned, permanently taking it out of circulation. From Etherscan:

The first burn on Erasure Bay.

“We’re really excited that momentum is starting to pick up. We’re seeing people track requests as they come in; we’re seeing fulfillments happen quite quickly, which we’re thrilled to see.
I expect we’ll see more and more burns as the platform grows, which is really exciting, too.” — Stephane

NJ added that, in the days leading up to Office Hours, she noticed an increasing number of Twitter users unfamiliar to the Numerai community tagging the Erasure Bay bot in threads having nothing to do with Numerai or the topics most commonly associated with the company. “That kind of early growth is really exciting to see,” she said.

“This is something we should tap into. There are so many things on Twitter making claims or asking for things. A lot of these ‘ask lazyweb’ questions get bad answers or troll answers. But if they asked on Erasure Bay instead and retweeted it, they would be guaranteed to get much higher quality responses. I think it’s really cool to @ them and tell them about the product.” — Stephane

$1 million staked on Erasure

Around March 20th, the Total Value Locked (USD) in Erasure surpassed $1 million.

Pictured: Richard on March 20th (probably)

The Erasure protocol lies beneath the Numerai tournament, Erasure Quant, and Erasure Bay, powering the staking mechanisms for these applications. Stephane explained that the continuing growth of value locked in Erasure is driven by multiple factors: data scientists performing well in the tournament, the launch of Erasure Bay, and the market performance of the NMR token.

Chart from DeFi Pulse

The amount staked on Erasure is a good metric for gauging total user activity across applications built on the protocol because, as Stephane explained, it encompasses a variety of different activities, such as a data scientist’s stake on their model and requests for information on Erasure Bay, while also showing true skin in the game from the users.
“That’s what we’re trying to optimize for with Erasure,” Stephane said.

Erasure Bay community AMA

The final monthly update section Stephane walked us through covered Erasure Bay Office Hours #1 and the feedback from the community since Erasure Bay launched. He said:

“The update for February was to try to get the community more engaged, because we really want Erasure Bay to be a product for the community and owned by the community, such that the community owns the roadmap.”

Questions from Slido

I don’t get it — in the simplest terms, can you explain what’s the point of all this?

Jonathan put it simply: Erasure Bay is a place where you can request any information. People visit the website and make a request for anything that can be delivered as a file, such as a video, a spreadsheet, a data set, or a graphic.

The requester locks money in a smart contract as a reward for the information. Anyone can claim that reward by fulfilling the request. Submitting a file to a request, however, requires the fulfiller to stake their own money as well, to guarantee confidence in their file; both parties have to have skin in the game. Once the requester has reviewed the file, if they’re happy with it, they release the reward and the fulfiller’s stake.

But if the requester isn’t satisfied with the file (maybe it’s not accurate or detailed enough), the requester can grief their money, burning all or part of what’s at stake. “It’s a way of crowdsourcing information from anyone in the world,” Jonathan said, “and it can be any kind of information.”

He continued: “The basic mechanism that can be generalized is people stake money, and if you’re not happy with what they’ve staked, you can destroy their money. Or if you are, you release the reward to them.”

To get an idea of what’s already been requested, visit the Erasure Bay Twitter account.

“There’s so much that someone, somewhere in the world knows, but there isn’t a way to credibly source it from them.
Getting people to stake on it is a way to suck information out of the earth.” — Jonathan

I believe there are a lot of people who land on Erasure Bay and are excited about it, but are also scared about the risk of putting money into something. Is there any way to provide a greater guarantee that this is a system that works? What kind of features did Numerai build to provide security for these users?

The staking mechanism is designed to offer this kind of protection. Using the example of a recent request for CT scans of COVID-19 patients’ lungs by a data scientist hoping to build models, Jonathan explained how asking for this information on Twitter opens the question up to the world, which may result in a high volume of responses, but trash responses create so much noise that the requester may be unable to separate garbage from gold.

Erasure Bay requires that anyone who responds put money down on their response, and if it’s not good, the requester can destroy that money. Because of this, the people who respond are overwhelmingly more likely not to be trolls or to provide poor quality information.

“Skin in the game prevents bad actors. That’s the whole purpose of the [Erasure] protocol and why we don’t need a centralized party to tell who’s right and who’s wrong. The users have all of the tools they need to do it themselves.” — Stephane

Will there be a chance for users who don’t have enough capital to stake to participate and make money using this platform?

Looking at the Erasure Bay Twitter bot for what’s already been requested, many of the requests are in the $5–$10 range — coffee money. Jonathan mentioned a user who recently fulfilled a request, earned their first DAI ever, and was excited to use them.
“I think this is a fun way for people to earn their first bit of crypto online,” he said, going on to add, “It will never be free, or not require a stake amount, because that defeats the point of the whole thing — you need some sort of skin in the game.”

How does Numerai expect the amount staked, specifically on Erasure Bay, to grow over time?

That’s the $2 million question.

Jonathan explained that right now, one of the biggest challenges is the friction that exists around people getting and using cryptocurrencies. He pointed out that many people aren’t comfortable using cryptocurrencies to interact with products like Erasure Bay yet. “That’s something we’re really focused on,” he said, “getting people to not be afraid to use [Erasure Bay].” This focus on user experience is a contributing factor in why the team built on Ethereum, and why the first token used for Erasure Bay is DAI — to make the platform as neutral and accessible as possible.

For more on why Erasure Bay uses DAI, see Erasure Bay Office Hours #1.

“I think that’s where we’re going to find more growth,” Jonathan said, “making this more usable to the person on the street. I think it has a broad application beyond just crypto people.”

Stephane added that (at the time of the Office Hours) Erasure Bay was about two weeks old and already had around $10,000 staked.

ACTUALLY, Stephane, it was $10.063k on March 31, the day of Office Hours. Chart from DeFi Pulse.

Any updates on the Erasure Grant? The $1 million grant announced in 2019?

“The short answer,” Stephane said, “is that we haven’t sponsored any other teams to build applications on top of the protocol yet.” Though Stephane said this was due to multiple factors, the primary reason he cited is the absence of tooling available to fully empower other teams. In reviewing the proposals Numerai received through the grants program, they noticed that a large number of the projects replicated a significant amount of work Numerai had already done.
They wanted to make sure that each grant application wouldn’t have to start from scratch and do redundant development work.

Numerai used $75,000 to sponsor a hackathon with CoinList to help develop new tooling for Erasure, one of the resulting projects being an Erasure SDK that the team used to build Erasure Bay. Stephane added that they plan to release the SDK to the public in the near future.

“The first step is to develop the tooling to make sure that people can be effective in building applications.” — Stephane

Once the Numerai team feels that the available tooling is sufficiently robust, they will revisit the grants program. Stephane said that the intention is to make the program entirely decentralized, so they’re exploring options to make it permissionless, removing any kind of application process. “Anyone can start building on the protocol,” he said, “and depending on the amount of usage, the amount of burn they’re able to generate through their app, they’ll get a proportion of the grant money automatically.”

I cannot wait for the Tinder-like application — I think that’s my best chance of becoming rich.

Erasure Bae ♥️ coming soon (maybe)

The idea of a dating application powered by Erasure Bay was first mentioned on the Erasure website and serves as a strong example of a use case beyond requests that look like traditional job board posts. Jonathan is also a big fan of the idea.

“Imagine this,” Jonathan said, full of enthusiasm, “imagine a dating application where every time you swipe right on someone, you stake $5. That way, when a girl sees that you’ve swiped on her, she knows you’re not playing around and swiping a million times. You’ve shown a bit of intent.”

In this example, the girl agrees to a first date, but requires her suitor to stake $500.
“You know that person won’t be late,” Jonathan said, “because if they’re five minutes late, the girl can delete their money off the face of the planet.”

Using the Erasure Bay model, with money at stake, the girl has some degree of protection against unseemly, aggressive, or otherwise unpleasant behavior. “I think it will fix dating for everyone forever.”

From chat: What prevents the girl from deleting the money even if the date isn’t late (in this context)?

Without skipping a beat, Jonathan replied, “Nothing prevents them from doing that.” He explained that the way Erasure Bay is set up, burning a user’s stake is not free: when a user creates a request, a punishment ratio is set. This determines how expensive it is for the requester to destroy the fulfiller’s stake.

Making a request on Erasure Bay

A similar approach can be used for a dating platform, meaning it wouldn’t be free for the girl to burn her suitor’s stake. The griefing mechanism is a central component of Erasure Bay — as Jonathan said earlier, it ensures both people have skin in the game. This provides a real economic incentive for both parties to behave honestly.

Jonathan explained that on Erasure Bay right now, a requester can arbitrarily set the punish ratio so that it favors whoever has the stronger reputation or the most trust in the relationship.
“If I’m trying to prove that I would be a really good date,” Jonathan explained, “I would set the ratio in [the girl’s] favor.” By making it so easy for his date to burn his stake, Jonathan is sending a strong signal of his confidence that he will be a perfect gentleman and that she will have a good time.

Is there a concept of ‘reputation’ that is built up over time in Erasure Bay?

“Not explicitly at the moment,” Jonathan said. “Reputation is the immutable ledger of griefs and fulfillment interactions you’ve had.” By looking at a user’s history in the Erasure Bay Twitter bot feed, or looking through their address history on an explorer like Etherscan, someone could easily determine whether or not the user is a bad actor based on their ratio of successful fulfillments to times they’ve been griefed. “But it is something we’ve thought about,” Jonathan said.

What about using the Erasure protocol for establishing a peer-reviewed paper repository in which reviewers are paid (or potentially punished) by editors?

“I think this is a really great idea,” Stephane said. “This taps into this narrative that we really believe in on erasure.world. We really think these mechanisms that we’ve built generalize into any marketplace where there’s information asymmetry between the buyers and the sellers. There are so many inefficiencies in the current systems that are tied directly to the inability of the buyer to make a credible threat against the seller.”

Richard slaps roof of Erasure: “This bad boy can fit so much damn grief.”

Stephane shared that for those interested in the game theory behind how this works, reading about asymmetric game theory or principal-agent problems is a good place to start.
Many of the mechanics comprising these types of games can be solved by introducing staking and burning mechanisms like those used on Erasure Bay, Erasure Quant, and in the Numerai tournament.

To learn more about why griefing works, you can also read “The neural basis of altruistic punishment,” de Quervain et al., 2004.

“What we really want to do is make it as easy as possible for people to build these applications and experiment with different use cases, and we’ll add those to the erasure.world website and convert the site into a gallery of ideas,” Stephane said. “The goal is to display the range of use cases that the community started building.”

Stephane asks: Jonathan, can you talk about erasure.world: where we’re going to take it and how we’re going to display other projects that the community builds?

Jonathan said that at the moment, Numerai has built most of the projects on top of the Erasure protocol: “we made it and we want to show people what MVPs and products look like on top of the protocol.” As people build new projects and products, Jonathan said, the plan is to display them on the Erasure homepage to show what people have been up to and to include metrics around development activity.

If you have great ideas, share them in the Erasure Community Chat or on Twitter by tagging @ErasureBay. Or you could be really cool and request the feature on erasurebay.org.

Haven’t used Erasure Bay yet? Check out this tutorial to set up your accounts and start making requests.

Don’t want to miss the next Erasure Bay office hours? Follow Numerai on Twitter or join the discussion on their Telegram channel or RocketChat.

Make sure you sign up below to get the Erasure monthly updates delivered straight to your inbox.

Erasure Bay Office Hours #2 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.
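The punish-ratio economics Jonathan describes can be sketched in a few lines. This is a hypothetical illustration of the incentive math, not Erasure's contract code: it assumes a punishment ratio R such that burning an amount B of the counterparty's stake costs the punisher R × B of their own money, and the specific numbers are made up for the dating example.

```python
def grief_cost(burn_amount: float, punish_ratio: float) -> float:
    """Cost to the punisher of burning `burn_amount` of the counterparty's stake.

    A punish_ratio of 0.1 means burning $100 of the other side's stake
    costs the punisher $10; a ratio of 2.0 makes griefing expensive,
    protecting the side being punished.
    """
    return burn_amount * punish_ratio

# Jonathan's dating example: the suitor stakes $500 and sets the ratio
# in the girl's favor, making it cheap for her to punish him.
suitor_stake = 500.0
ratio = 0.1  # hypothetical value chosen for illustration
print(grief_cost(suitor_stake, ratio))  # burning his full stake costs her $50
```

The asymmetry is the point: by choosing a ratio that makes him cheap to punish, the suitor credibly signals good intentions, which is the same "credible threat" mechanism Stephane connects to principal-agent problems.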

Numeraire

20. 04. 14

Office Hours with Arbitrage #3

From March 19, 2020.

The one where Arbitrage interviews Michael Oliver

Arbitrage welcomed Numerai tournament data scientist Michael Oliver to Office Hours, showing off the questions he prepared in advance.

“It’s nothing big, just random stuff.” — Arbitrage

As part of Office Hours, Arbitrage introduced a new segment where he interviews other Numerai users, and Michael had the honor of being the first. After the short introduction, Arbitrage dove right into the questions.

Arbitrage: How did you first hear about Numerai?

Michael: It was a while back, I think 2016. I read an article and thought, “hey, that sounds like a good idea and kind of fun.” I procrastinated working on my thesis by spending way too much time building models for the data back then.

Arbitrage: I resemble that comment.

Michael: *Laughs* Yeah, I found out about [Numerai] and played with it for a while, but it was taking up too much of my time. I made like, $6, so I kind of forgot about it — this was before NMR existed. I thought, “I should get my $6,” and found out I had a bunch of NMR, and that sucked me back in.

Arbitrage: And here we are, right? I quit for a while, then heard about the token coming out and saw I was supposed to get like, 1,200 NMR, and thought that’s pretty cool.

Michael: Yeah, and I didn’t find out about [NMR] until like, a year after it came out… This happened before I really understood anything about cryptocurrencies, so it forced me to learn about that stuff.

Arbitrage: I’ve heard that too, from other users.

Michael: Yeah, I had to learn how to use cryptocurrencies and move them around and such.

Arbitrage: What are the names of your three primary accounts?

Michael: There’s Niam, NMRO, and MDO (which is my original account).

Arbitrage: Are you up to ten accounts yet?

Michael: No, you still need email addresses, and I didn’t want to make ten email addresses.

Arbitrage: If you have Gmail, you can just add “+test1, +test2”.

Michael: Oh yeah, I forgot about that!
I’ve been working on some things that I want to test, but couldn’t figure out how to do it.

Arbitrage: You kind of answered my next question, “When did you start participating?” And that was around 2016 — do you remember what month?

Michael: *Looking up his account* MDO woke on July 11, 2016.

Arbitrage: What’s MDO’s rank?

Michael: Well, it fell a lot today…

Arbitrage: Yeah, we’ll talk about that.

Michael: Niam is my best account, and he fell a lot today too.

Arbitrage: Yeah, it was a bloodbath.

Michael: He just got back into the top 25 staked accounts, but fell to 60th unstaked.

Arbitrage: You mentioned you live in the Seattle area — would you mind telling folks what you do for a living?

Michael: I’m a computational neuroscientist; I work at the Allen Institute for Brain Science, which is a non-profit research institute, and my actual job title there is scientist. I analyze data that they collect — they collect a lot of data from mice — and the data I work with is mostly from the visual cortex. I try to build models to figure out the functional properties of the neurons in the visual cortex that we record. Basically, I’m trying to build models from the stimuli — the pixels on the screen you show [the mice] — to the responses we record, and trying to figure out what it all means.

Same.

Arbitrage: I don’t even know what to say — that’s amazing.

Michael: It’s basically building neural networks to understand real neural networks.

Arbitrage: I mean, if it’s a dendron, if that’s what we’re using, then it should work, right?

Michael: I hesitate to draw too many direct analogies, because when you get into this world of building complicated function-approximators to model complicated functions, there are many different neural architectures that can approximate the same function reasonably well. Trying to figure out what is relevant and what is not is a tricky thing. The model selection problem in science is a tricky one.
One of the nice things about things like finance and whatnot is that no one cares —

Arbitrage: *Laughs* Hey…

Michael: I mean, especially in this competition, we have all of these features and we don’t even know what they mean. So it’s like, “Interpretation? What? Ah, who cares.”

Arbitrage: That’s what I tell my students all the time: “Who cares? I don’t know what it is either.”

Author’s note: in Office Hours with Arbitrage #1, he mentioned using the Numerai tournament with the students in his Financial Machine Learning course.

Arbitrage: What programming language do you use and why?

Michael: I use Python. I spent a lot of time in Matlab in grad school, because that’s what was best to use back then, but I was there for the takeover of Python in scientific computing. What really brought me to Python was the GPU libraries, like Theano originally (which sadly no longer exists). Most recently I’ve been using PyTorch a lot, which is fantastic. Doing neural network stuff, you basically have to use GPUs if you want to finish anything in a reasonable amount of time. The GPU libraries for Python are basically the best there are in any language, just in terms of the depth of the ecosystem.

Automatic differentiation in these things is amazing because you can write your crazy loss functions — which is relevant for [the Numerai] tournament because you can write a function to optimize Sharpe, and you don’t have to work out the gradients because it will do it for you… Things are so much nicer now than when I was doing neural network stuff in Matlab, where you had to do the gradients for all of your functions by hand to make sure they’re right. This is tricky because things will work even when they’re not right.

I believe there are papers in the neural network literature that were inspired by people realizing their buggy code still worked. One paper showed that some arbitrary linear transform of the gradient will work just as well as the actual gradient for training a neural network.
That’s kind of weird and interesting.

Arbitrage: Yeah, I’m teaching machine learning in my finance class, and it’s completely out of domain. I showed them what a binary classifier would represent: if a 1 is a dog and a 0 is a cat, and you show it a fish, it’s going to determine whether it’s a cat or a dog. And so I think we still have that kind of problem at a very base level with what we’re doing with machine learning.

Michael: As a biological vision and semi-computer vision person, it’s been kind of surprising to see how well a lot of these artificial vision systems are working, and it’s showing us that the problem is actually a bit easier than we might have thought. What these things are doing is really good texture classification. They’re really good at understanding complex combinations of features and textures, but whole-object understanding is still a ways off. That’s why they can be fooled fairly easily: if you paint a car with a leopard pattern, it will classify it as a leopard with very high probability, because it’s looking for these higher order features but doesn’t have any object understanding.

It’s a problem because if you have the right combination of features, like a goldfish that has some features a cat might have, [a neural network] might think it’s a cat because it has the cat features [the network] is looking for.

Arbitrage: Yeah, it may not be confident, but it’s going to tell you it’s one or the other.

Michael: It might even be confident! It might just be an adversarial fish.

Arbitrage: What are your top 3 tips for users new to the tournament?

Michael:

1. Use eras for cross-validation; that’s definitely something that’s easy to overlook.

2. Read the documentation of scikit-learn so you actually know what the functions you’re using are doing. For example, the time series cross-validation routine can take a `groups` argument, but it doesn’t use it; it completely ignores it.
So if you feed the data in and use a time series cross-validation, it will not split things into eras even if you fed it a list of groups. It says this in the documentation, but even if you read that you might assume it works because it takes the argument in.

Read more about using scikit-learn in the “Working with Numerai data and SKLearn” notebook.

3. It takes a long time to validate a model. Models will be better for a while and work for a while, but I have spent a lot of time changing models based on a couple weeks’ performance, which I’m not sure was the best use of my time. A model might be overall better, but can perform worse for a few weeks. You have to shoot for the long game, and it can be hard to do.

It can be hard to validate your model within the data set we have because it’s not a lot of data. If you divide it up by era, it’s really 132 eras you can use for training, and it’s not a ton.

Arbitrage: Who’s your favorite team member?

Michael: Who’s my favorite team member? You mean out of the Numerati people?

Arbitrage: Sure, we’ll call them the Numerati.

Michael: Well….

Arbitrage: *Laughing* Look how cautious he’s being!

Michael: At ErasureCon where I met you, I also met Mike who joined the team, and you guys are the ones I spent the most time with, and I like both of you. I really like Richard’s interactions, I always find them interesting.

Arbitrage: You have to pick one!

“There can be only one” — Arbitrage

Michael: I have to pick one?! It’s tricky — I wish Ralph interacted more, because every time I have a conversation with him or hear him talk about something, I feel like I learned something. He knows a lot more about a lot of things than I do, so I feel like I always learn from Ralph.

Arbitrage: Alright, alright, I’ll let you non-answer, I’ll let you off the hook.

Michael: Yeah that was kind of a non-answer. It’s hard to pick!
And of course there’s Anson (Slyfox) who’s always so helpful with everything.

Arbitrage: *Laughing* How many beers did you drink at ErasureCon?

Michael: *Also laughing* I kind of stopped counting.

Arbitrage: Alright, maybe you can help me with this next one: how many beers did I drink at ErasureCon?

Michael: It was comparable.

Arbitrage: What’s the number one feature request or improvement you have for the tournament?

Michael: I know a lot of the improvements I want to happen are in the pipeline, such as multiple accounts and better graphing and data visualization on the website. Now, based on a conversation I had with Mike P the other day, I’m interested in their new data pipeline. It’s apparently going to be really cool when they tell us about it, which makes me wonder if they added pandemic features or something. I feel like something interesting is coming.

The reputation system still hasn’t been switched over (at the time of recording); I think it was supposed to go live on March 4th or something. But I’d rather have it done right than have to fix it.

If you’re passionate about finance, machine learning, or data science and you’re not competing in the most challenging data science tournament in the world, what are you waiting for?

Don’t miss the next Office Hours with Arbitrage — you never know who might join. Follow Numerai on Twitter or join the discussion on Rocket.Chat for the next time and date.

Pictured: happy data scientists

Thank you to Arbitrage for hosting Office Hours, and to Michael Oliver for the Q&A. Shout out to NJ for joining the call.

Office Hours with Arbitrage #3 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 04. 06

Office Hours with Arbitrage #2

From March 12, 2020

Following the great Q&A that came out of his first office hours, Arbitrage returned to answer more questions from the community about staking, finance, and Numerai’s data science tournament, with special guest Slyfox (Anson Chu, CTO of Numerai). After a quick proof of credibility, Arbitrage dove into the questions from Slido.

Pictured: proof of (school) work

Questions from Slido

Can you talk about the hyperparameter optimization you use, and the score you optimize for?

Similar to Arbitrage’s three tips for performing well from the first office hours, he explained that he uses half of the training data and does a grid search for whichever model he’s using. Regarding score, Arbitrage checks to make sure his model doesn’t produce too strong of a signal, as that likely means it’s overfit on the data.

“In this close correlation environment, I want 4% — no higher than 4.5%.”

Though this has been the sweet spot for Arbitrage, he added, “your mileage may vary,” reiterating how important it is to avoid overfitting. Once he feels confident a model is close to right, he trains on the entire training set to get exact parameters, noting that the parameters usually don’t change. Once he has that, he locks in the parameter space and adds in the validation data (132 eras at the time of recording). Arbitrage also noted that he has clones of his models that don’t look at validation, and is monitoring them to see how they perform long-term.

Arbitrage invited Slyfox to share his perspective:

“I’m actually not qualified to answer this question.” — Slyfox

(Doing other fantastic work at Numerai, Mr. Fox doesn’t do much modeling.)

I’d like to submit again on the past year data to save weeks of validation — even a little feedback, such as average rank, would be a great help. Is that nonsense?

Arbitrage began by noting that the topic of historical performance frequently comes up during Numerai’s fireside chats (like this one from ErasureCon 2019).

“You all have backtest data.
It’s possible that you have some gaps where we could predict on something that doesn’t exist in the backtest but is former live data. It would be an out of sample validation. I don’t know how it would work in practice, but as long as it’s not in the backtest data, it is held out data and we could get some results on it. The problem is that those eras you’re looking at, in terms of average sample validation, are not like the current regime. If you fit to that, you’re going to get very bad results.”

“This one I can answer!” — Slyfox

Slyfox explained that this is something the team is actively exploring, citing again how frequently community members bring up their desire for historical data. Currently he’s investigating whether or not Numerai can release previously live data (live features and the targets) as something like a Validation 2 set, specifically to give the data scientists feedback. To Arbitrage’s point about previous data being part of a different regime, Slyfox said that’s exactly what they’re trying to figure out, adding, “I think it’s the future.”

Arbitrage wasn’t so sure: “The tendency to want to overfit to data; it’s really easy to fall into that trap. The way [the tournament] is now, we’re so blind to everything that there is an element of luck. But with time, you do find some experience with the data, like I found a range of correlation scores that works. If I had the opportunity to test my model [against the Validation 2 set], I don’t think I would change anything. I don’t think it would benefit me, and I’m not sure how it would benefit anyone else.”

Arbitrage mentioned that we’ve seen issues with overfitting in the tournament before, with some users hyper-tuned on a specific set of eras and using multiple accounts to identify which eras are most like the current regime.
“They’ll do well in the short term,” he said, “but if you were to lock those models in, they would be bottom performers.” Arbitrage expressed that he doesn’t want to keep modeling, preferring to model once, add that model to Numerai Compute, and forget about it.

Phorex (community member): I never stop modelling.

Arbitrage: More power to you.

Arbitrage went on to explain that the intention of the tournament is to have data scientists build models that generalize well, and that for Numerai, it’s more desirable for the models to be stable, with little change week to week. If the models were to change drastically, Numerai wouldn’t be able to rely on the backtests, adding, “It’s a risk.”

Slyfox agreed that the risk is present, but doesn’t want to discount the potential value in the availability of large sets of historic performance data over the long term, wondering how the tournament could benefit from this mechanism ten years (hypothetically) in the future.

“Do we continue to hide that from you guys? Or do we give it out? I think it’s worth investigating.” — Slyfox

Slyfox then noted that as long as they keep a period that is truly out of sample, Numerai can evaluate a data scientist’s true out of sample performance, which, from their perspective, is what matters most. Even if they do give out some of the live targets as Validation 2 data, Numerai would continue to retain a big chunk of the data as test data in order to maintain a good idea of how models are performing out of sample.

As Slyfox finished, Arbitrage noticed that Daily Scores were posted, so he briefly paused to give everyone a chance to check their models.

“Oh!
We have a winner!” — Arbitrage, after a brief interlude to check Daily Scores

How extreme volatility in the global stock market is NOT translating directly into Numerai’s data scientists’ performance

“I talked to you guys about this in person,” Arbitrage said, “and I thought you were full of you-know-what.”

“This makes me have to eat crow.”

Arbitrage expected recent volatility in the global stock markets to have a dramatic impact on model performance, bringing up the idea of burn insurance he mentioned in the first office hours, but noting that he thinks it may be unnecessary based on the latest tournament results (at the time of recording).

In response, Slyfox said: “If this was back in the Bernie days, I think we would have problems, but the new Kazutsugi target, as Richard says, is much more stable and we are not that correlated with the overall markets — certainly not just the US [market].” He then reiterated that the targets in the Numerai data are based on global markets, adding that they’re certainly not limited to longs only.

Author’s note: Numerai names the targets in their data sets. The targets are Kazutsugi (current at the time of writing), preceded by Bernie, Charles, Elizabeth, Jordan, and Ken.

Example Numerai data with the Kazutsugi target column at the far right. From the Introduction to Numerai.

How do users tune hyperparameters?

Michael P (engineer/coolguy) joined the video call to expand on the first question regarding tuning hyperparameters as a user, specifically what form of optimization to use and what score to optimize for. Starting with what to optimize for, Michael said that for maximizing payouts, the data scientists are incentivized to optimize for mean correlation. He noted, however, that they do find it useful to optimize against something like a Sharpe ratio, taking the mean of a user’s era scores and dividing by their standard deviation.
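That Sharpe-style score can be written in a few lines. This is a sketch, not the official tooling; the column names (`era`, `prediction`, `target`) are assumptions:

```python
import numpy as np
import pandas as pd

def era_sharpe(df: pd.DataFrame) -> float:
    """Mean of per-era scores divided by their standard deviation.

    Assumes `df` has 'era', 'prediction', and 'target' columns
    (hypothetical names for this sketch).
    """
    def era_score(g: pd.DataFrame) -> float:
        # Numerai-style era score: correlation of ranked predictions
        # with the targets for that era.
        ranked = g["prediction"].rank(pct=True)
        return float(np.corrcoef(ranked, g["target"])[0, 1])

    scores = df.groupby("era").apply(era_score)
    return float(scores.mean() / scores.std())
```

Selecting hyperparameters that maximize this ratio rather than the raw mean correlation favors models that are consistent across eras.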
This, he said, leads to much better consistency and lower risk (specifically out of sample). Michael mentioned he would try to get boilerplate code for this optimization into the Numerai tournament tips and tricks notebook.

From the tips and tricks notebook.

Arbitrage to Michael: Now that you’ve had more time to look at the data and the users, have you noticed any interesting trends?

Michael has been closely watching the meta-model contribution (MMC) scores, specifically to see how data scientists have changed their models since Numerai made that announcement.

“In the first week or two, people were changing their models to get away from these tree-based approaches, but lately it’s been converging on integration-test type models because MMC is encouraging that. I think there’s going to be a short term where it is more correct to go to these safer models, where everybody is doing good and MMC will discourage bad models. Once everyone converges on the easy approach, it will open up and start being more profitable to diverge from that and come up with more creative ideas.”

Any thoughts about optimizing feature exposure methods?

Arbitrage echoed his earlier point about being afraid of accidentally overfitting: “If you take Richard’s advice, treat the eras properly, and don’t oversample the data, you can control for that naturally.” Considering the tournament leaderboard, Arbitrage noted that he hasn’t seen any model with lower than 8% exposure that’s also performing well.

Michael added that feature exposure is included because, in their research, the Numerai team has seen a strong correlation between low feature exposure and high performance on the leaderboard.

“But it’s not for free,” he said, “you can’t just post something with 0% feature exposure and expect it to perform, it all has to be going in the right direction. It suggests that the models are looking at something a lot deeper than just linear exposures.
They’re more robust and don’t degrade as much over time.”

“If you have two models that are close in performance but one has significantly lower feature exposure, you might want to consider using that one.”

Arbitrage pointed out that there are groups of features, and he would be interested to see how a meta-model based on an ensemble of multiple models, each trained on a group of features, would perform. His suspicion is that it would overfit, but he said he’s curious how it would perform.

Michael mentioned Numerai data scientist Lack of Intelligence, who chose that name because they don’t use any of the intelligence features. He said that the features are grouped for a reason: they are related and not just randomly grouped. Arbitrage added that Richard told him it would be interesting to see users completely ignore a set of features; if they still performed well, it would be very unique for the fact that it wasn’t looking at those features.

Numerai data feature groups. Roll 2d10 to see which feature you overfit to.

Arbitrage asks: how’s MMC coming along?

Slyfox: “We’re still in the phase of analyzing how people are reacting to [MMC] and doing internal research. We’re designing a payout structure for it, but we don’t want to rush it. The last few changes to the payout system stirred up backlash, so we want to do it right and we don’t mind taking it slow. We want to do the minimum amount of change, a 20% change for 80% of the effects.”

He gave the example of releasing the MMC scores without the payouts as a first step to get the data scientists thinking about MMC in the right way.
“Similarly to multiple accounts,” he said, “where we loosened the rule before shipping the feature so we can get most of the benefit before we have to enact the change, because we know every change is disruptive.”

“In short,” Slyfox concluded, “we’re definitely doing it but it’s not going to be soon.”

Arbitrage asks: Michael mentioned an internal priority is moving multi-account to one account, any updates?

Very proud of the name he came up with, Slyfox introduced “Single Account Multiple Models,” which Arbitrage dubbed SAMM.

Pictured: Numerai data scientist

Slyfox gave a shout out to Numerai team member Patrick, who is taking the lead on this project, adding that it’s making good progress. From Slyfox’s perspective, there’s been an increase in new accounts created, which comes with the need to properly set up multi-factor authentication and IP verification (an admittedly cumbersome process). Slyfox and Patrick are both primarily focused on shipping SAMM so data scientists can easily manage multiple models “without going crazy,” as Slyfox said.

Michael added: “I kind of overlooked it at first, but I think it’s a prerequisite for real MMC — the ability to experiment with different models without having to stake on them yet, but still getting MMC scores and seeing how correlated they are with the meta-model, is going to work in conjunction with MMC to create a better environment for iteration. That’s a big part of why it’s prioritized right now.”

Arbitrage has a feature request for SAMM

As someone managing six accounts already, Arbitrage expressed that he doesn’t have the time to create more.
But, still wanting to take advantage of multiple accounts, he suggested a feature to adjust the weight of different models within a SAMM account. Arbitrage explained that a data scientist with two accounts could want to create a third account with the first model weighted 70% and the second model 30%, giving users a new vector for improving their performance without the need to start accounts from scratch.

Slyfox had been down this road before, however, and had Richard’s counterpoint ready: would data scientists trust Numerai to blend their predictions correctly? Or would they rather blend the predictions themselves before submitting to the competition?

Proper blending is key

“I think it would be a fun feature for the future,” he said, “but right now our focus is on the most basic case of making sure you all don’t have to keep logging out and back in.”

Arbitrage asks: what’s the next step in the evolution of the data set?

Arbitrage noted that there’s significantly more tournament data now, but the training data has remained the same, wondering, “when do you think that will flip?” The next possible change to the tournament data sets Numerai is exploring internally is releasing more validation data: the Validation 2 data (historical live data) that Slyfox mentioned earlier in the discussion.

Alongside that, Slyfox said they continue to monitor the ever-growing test set to make sure it doesn’t push data scientists past memory limits. Following from that, one idea Slyfox mentioned is to explore different file formats such as HDF5 or Parquet.
Much of the Python tooling, like Pandas, already supports these file types and they’re faster to transfer and cheaper to compute.If you’re passionate about finance, machine learning, or data science and you’re not competing in the most challenging data science tournament in the world, take a minute to sign up.Don’t miss the next Office Hours with Arbitrage: follow Numerai on Twitter or join the discussion on Rocket.Chat for the next time and date.Thank you to Arbitrage for hosting Office Hours, and to Arbitrage, Slyfox, and Michael for collaborating on this post.Office Hours with Arbitrage #2 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 04. 01

Erasure Bay Launch — $1M St...

Erasure Bay Launch — $1M Staked — Community AMA

There is now $1M staked on Erasure. Follow @ErasureBay on Twitter to see new requests. Post your questions for the Erasure Bay AMA.

Art requested on Erasure Bay

🚀 Erasure Bay Launch

This product launch has been highly anticipated and we are thrilled by the community response. Read the launch announcement on CoinDesk. Erasure Bay is an unstoppable marketplace for information of any kind. In a few clicks you can make a request and the Erasure Bay Twitter bot will broadcast the request to the Internet. We’ve seen requests for a range of use cases including art, hard to find data, independent research, contract work, and bug bounties. Here are some of our favorites:

WANTED // 📣 Full video of Jeffrey Epstein deposition (>10 minutes) I've only seen short clips - https://t.co/gwSnTXBy9x // @JonathanSidego paying $2000.00 // https://t.co/dwWjlRLMjA — @ErasureBay

WANTED // NMR Fundamental Valuation Model (CSV, XLSX) from independent analyst. Include >500 words of reasoning on model's logic. // @cburniske paying $100.00 // https://t.co/LptilyiHHv — @ErasureBay

WANTED // Overview of active, large dev communities online interested in p2p/local-first, serverless, jamstack development // @dazuck paying $100.00 // https://t.co/3A89dqq78W — @ErasureBay

WANTED // A dataset of dangerous animals in csv format -Required columns- Name, Region, Height, Weight, Text Description of Danger // @OmniAnalytics paying $50.00 // https://t.co/U0fCh2Wmsv — @ErasureBay

Art produced under duress on Erasure Bay

The first burn performed on Erasure Bay was against a Numerai contractor who was late to a few meetings:

GRIEFED // @thegostep burned $10.00 of @internet_cream's stake // https://t.co/Lfg7Wfq3Fm — @ErasureBay

Burning is required for the marketplace to function correctly. It allows for the community to form norms about quality and appropriate behavior without reliance on a centralized authority. When a burn is performed, the DAI from the reward pool is automatically swapped for NMR on Uniswap. The NMR is then provably destroyed by removing it from the total supply forever.

etherscan.io/tx/0x0cadb065c62bb3eb948a07d63999ba9356845a0cdb07935fb62d98f26025127f

🤑 1 Million Dollars Staked

Erasure has passed the $1M staked milestone this month. This is driven by strong user performance on the Numerai Tournament, the launch of Erasure Bay, and market performance of the $NMR token. Amount staked is a great way to measure the amount of user activity on the protocol.

defipulse.com/erasure

🙋 Erasure Bay Community AMA

Last week, we hosted the first Erasure Bay office hours. Post your questions here for the next AMA. You can join the AMA live this Tuesday morning at 10.30am PT.
Zoom link will be shared in the Erasure Community Chat morning-of.

Here are some highlights from last week’s AMA by Anthony, where we answered questions like “Why did Numerai build Erasure Bay?”, “What are the plans for governance of the Erasure protocol?”, and “What went into the decision to use DAI for payments and staking?”

Erasure Bay office hours #1

We received some great user feedback during the last two weeks. Here are some feature requests we are excited about:

Support for other tokens like NMR and USDC in the reward pool and stake

A more intuitive way to select request parameters like punishment ratio

Ability to make an exclusive request that can only be fulfilled by a specific person

Ability to reveal the submitted file publicly to allow anyone to verify it

Ability to negotiate request parameters or fulfilment content

As we said in our last post, this product is now in the hands of the community — your feature requests and feedback have a direct impact on the roadmap for Erasure Bay. If you have great ideas, share them in the Erasure Community Chat or on Twitter by tagging @ErasureBay. If there is a feature you really want, you can even request it on erasurebay.org.

Jobs

We are on the lookout for elite talent to help our mission to fix the Internet.

Software Engineer - Head of DevX @ Erasure

Community

General: Telegram / Rocket.Chat — Technical: Telegram / Rocket.Chat

Erasure Bay Launch — $1M Staked — Community AMA was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 03. 30

Erasure Bay office hours #1

Erasure Bay: Office Hours #1

Numerai expanded their popular office hours video calls with a new topic — Erasure Bay. On Tuesday, 3.17, Stephane Gosselin and Jonathan Sidego opened up a Zoom conference for the community to come chat about the recently launched Erasure Bay. For those who couldn’t make it, here’s a recap:

Stephane asks Jonathan: why did Numerai build Erasure Bay?

Kicking things off, Stephane asked Jonathan about the genesis of Erasure Bay and how the application came to be. “Sourcing information is a really good use case for staking,” Jonathan explained, “because traditionally if you wanted information from someone, you could only see that information after you’ve got it from them. So getting people to stake on it, where you can burn [the stake], is a way to start getting reliable information from people you don’t really know too well.”

Where traditionally people would seek out information only from reputable sources, Erasure Bay offers a way to (in Jonathan’s words) tap into the long-tail of humans around the world, the “lazyweb of people who know things”.
Staking has worked as a quality mechanism for Numerai in the past, as part of their data science tournament, and in creating Erasure Bay, Numerai is building a generalized version of that mechanism to apply beyond stock market predictions.

Along with Erasure Bay, Numerai launched erasure.world

The Erasure protocol’s new homepage, erasure.world, briefly introduces the protocol and describes what the Numerai team believe are potential use cases. Example: staking for online dating, where you can burn the other person’s stake if they ghost you, show up late, or turn out to be super rude.

Jonathan neatly summarized the message of the Erasure website and one of the goals of the protocol: “Tying a little bit of money, through staking, to online interactions is a great way to get more honest, real, useful information.”

Stephane added, “There’s this whole vision of a future world where everyone is tied together by economic incentives. It seems like that was the promise of a lot of early DAOs and things, but it never really came true.”

How does a hedge fund end up building Erasure Bay?

For all its blockchain and data science, Numerai is a hedge fund.

See?

How does a hedge fund end up building something like the Erasure protocol or Erasure Bay? As Jonathan explained, it actually makes a lot of sense. Numerai has developed a lot of ways to source information, and along the way refined ways to filter that information for quality.

“We tried a lot of ways to get honest predictions out of people, and to make sure Numerai users aren’t overfitting the data we give them to make predictions. So we iterated through a lot of different strategies for getting good information from our users, and staking was like, far and away the best way to get people to have skin in the game and provide honest information.
It really worked out well for Numerai, and we thought, ‘this could be amazing for the world.’ Other hedge funds and companies haven’t really successfully crowdsourced, and I think we figured out a way of crowdsourcing that the rest of the world would do well to implement in certain ways.”

Questions from the community

In advance of the office hours, Numerai shared a Slido link on Twitter and in their Telegram channel for community members to leave questions. After the intro by Stephane and Jonathan, the Q&A began.

Have you considered the possibility of creating sub-branches of Erasure Bay for different kinds of information? Zero-day attacks, memes, written reports, etc.

While the team did consider many different applications, Jonathan explained, for the first version they wanted to make something as broad as possible. “v1 is kind of a broad-ish implementation,” he said, “and I think we might keep it broad and not too opinionated for a while and let users discover the different ways of using it.” Jonathan did note that, should strong verticals appear, the team might consider tooling and UX changes to support those verticals.

Stephane added that Erasure Bay is completely open source and was designed to be easily replicable. A community member could fork Erasure Bay and, while still operating on top of the same protocol and data, re-implement the front end to customize how it presents information to users. Stephane gave the example of filtering out all requests not related to infosec to create an infosec-specific Erasure Bay.

Jonathan also brought up the Erasure Bay Twitter bot, which Tweets out requests posted to Erasure Bay, noting that it presents an opportunity for people to organize around different keywords or hashtags for specific requests.

Will we be able to use just MetaMask?

Currently, making and fulfilling requests on Erasure Bay requires that a user have a Twitter account and an Authereum address.
Initially, Authereum was selected because the service could connect to Erasure Bay without requiring a browser plugin or extension, allowing the most people to use the platform across devices (including mobile), which was the team's main goal. Authereum also allows for built-in Wyre support, letting users convert fiat to crypto within the application. Jonathan noted that using something like WalletConnect to support other wallets is on their roadmap, but not in the immediate future.

"It's an interesting trade-off because on the one hand, lots of people have been using MetaMask for a long time and are familiar with it, and sort of forget how difficult it is to use at first. For a user that's completely new to crypto, it seems like a very complicated app where you have to confirm a bunch of things and don't really know what you're seeing, and so it's a more difficult experience for them. And then, when someone's already familiar with MetaMask, transitioning to Authereum is like, 'whoa, are we taking a step back here? Username and password? I thought we were just going to do private keys?' It's a bit of a balance. We'll see how things develop, but the goal is to provide the best UX." — Stephane, on Authereum and MetaMask

Any new features coming to Erasure Bay?

Right now the focus is entirely on getting the current iteration as polished as possible. "We're going to let the users and community lead the way, and we'll follow them in how they decide to use [Erasure Bay] and support features that users want," Jonathan said, adding, "We're open to so much feedback at the moment, so join our Rocket Chat or tweet at us with any ideas you have."

Why is it necessary to have a Twitter account to be able to use Erasure Bay?
Doesn't that make it less accessible?

Jonathan explained that they decided to use Twitter for two reasons:

Bootstrapping reputation: if someone is making a request for some information and all you can see is a wallet address, there's no confidence in interacting with that user, who might end up griefing you (burning your stake) unfairly. If it's someone fulfilling your request, you don't know if the information is going to be real. Attaching a Twitter account lets people see that it's a real human, with some followers or a verified check mark.

A means to interact with your counterparty: when fulfilling a request, you can see who made the request and reach out on Twitter with any questions, or possibly negotiate.

Read more about Twitter in the Erasure Bay docs.

Tweets on the Erasure Bay homepage are not the most recent — can you solve that?

The tweets displayed on the Erasure Bay main page are currently manually updated by the Numerai team. Initially the feed showed the most recent tweets from the Erasure Bay bot, but the team decided to switch to a curated list of interesting requests while they improve the automation.

What are the plans for governance of the Erasure protocol for token holders?

Stephane shared that while governance processes are popular ways for projects to get their communities involved, he hasn't seen a proposal that seems like a single best way to approach governance. "All of the proposals have a lot of question marks, and it's a speculative way to design a platform," he said.

Instead of relying on governance for decentralization, Numerai designed their smart contracts to be completely decentralized from the start, with users opting into any upgrades through the applications they use. This means that if a new release of the Erasure protocol comes out, it's up to app developers like Jonathan (developing Erasure Bay) to recognize whether that new release has the features their application wants or needs.
As Stephane said, "the protocol developers don't retain much power; they can ship updates to the chain, but it's completely up to the users of the Erasure protocol to decide if they want to use a newer version or not."

What will the core focus for Erasure Bay be over the coming weeks and months?

Jonathan and Stephane both agreed that the core focus in the short to medium term is on growing the community of Erasure Bay users. They want to get people using it to see how they use it, which will inform how they move forward with the project. Stephane noted that while they want to attract as many new users as possible, they want the community to grow organically as use cases emerge. "We have a couple of cool ideas of what we can do," Jonathan said, "but I don't want to force a bunch of features on people; I'd rather see what the people want."

Will the Numerai team use Erasure Bay for the development of new Numerai products?

Numerai (the company) essentially has two core components: the hedge fund, whose business model is to continue running the tournament, and the protocol side, focused on empowering people to build their own applications with their own business models (using the same primitives as the data science tournament, but in different applications).

The protocol team is incentivized to build applications that demonstrate what the Erasure protocol can do. "Erasure Bay is a great example of this new kind of product that couldn't exist without Erasure," Stephane said, "which is why we were really excited to build it ourselves." Stephane added that if they come up with other example products like Erasure Bay, they would consider building them, but the goal is for the community to be empowered to build.
To that end, the next steps are shipping an SDK and open-sourcing the Erasure Bay front end.

Can you expand on why you decided to use DAI while burning NMR in the background?

Stephane explained that the decision to use DAI, with NMR in the background, was debated internally for almost a year, and is based on a simple theory: if they want Erasure Bay to be usable by the masses, payment and staking with a volatile token doesn't make sense. "You have a completely different risk profile for users that are willing to tolerate [token] volatility versus those that are not." The decision to use a stablecoin for staking comes from a place of working towards broad market appeal.

Stablecoins, however, are not designed to be burned, and burning is a necessary component for making the economics of Erasure Bay work. In contrast, NMR has a fixed supply and is meant explicitly for staking and burning, making it an ideal complement to the stablecoin at the foreground of Erasure Bay.

Every stablecoin other than DAI has some kind of compliance mechanism, tied to the jurisdiction issuing it, that allows for cancelling or censoring transactions. DAI is currently the only stablecoin without those centralized controls, making it the most attractive option from the beginning. Despite the challenges around collateralization DAI is facing, Stephane expressed confidence in the stablecoin's future (but noted that they will be ready to switch to a different stablecoin should that become necessary).

The first Erasure Bay office hours chat was lively and informative. Jonathan and Stephane walked us through many of the nuances and details of Erasure Bay that aren't necessarily apparent when using the application, but all of which contribute to the slick UX.

If you haven't used Erasure Bay yet, check out this tutorial to set up your accounts and start making requests. Don't want to miss the next Erasure Bay office hours?
Follow Numerai on Twitter or join the discussion on their Telegram channel or Rocket.Chat.

Erasure Bay office hours #1 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.
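Much of the chat above circles one primitive: a counterparty stakes money, and the other side can pay to burn part of that stake. A minimal sketch in Python, purely illustrative (real Erasure agreements are Ethereum smart contracts handling escrow, tokens, and timeouts on-chain; the class and method names here are ours, not the protocol's):

```python
# Toy model of an Erasure-style griefing agreement.
# Illustrative only: the real protocol is implemented as
# Ethereum smart contracts using DAI/NMR, not Python objects.

class GriefingAgreement:
    def __init__(self, stake: float, punish_ratio: float):
        self.stake = stake                # counterparty's stake, in DAI
        self.punish_ratio = punish_ratio  # cost per dollar of stake burned

    def punish_cost(self, burn_amount: float) -> float:
        """What the punisher must pay to destroy `burn_amount` of stake."""
        if burn_amount > self.stake:
            raise ValueError("cannot burn more than is staked")
        return burn_amount * self.punish_ratio

    def punish(self, burn_amount: float) -> float:
        """Burn part of the stake; the burned money is destroyed,
        not transferred to the punisher. Returns the punisher's cost."""
        cost = self.punish_cost(burn_amount)
        self.stake -= burn_amount
        return cost

agreement = GriefingAgreement(stake=10.0, punish_ratio=0.5)
print(agreement.punish(10.0))  # 5.0: it costs $5 to burn the full $10 stake
```

Because punishing costs the punisher real money, it only makes sense when the counterparty genuinely misbehaved, which is what keeps "skin in the game" honest on both sides.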

Numeraire

20. 03. 24

Office Hours with Arbitrage #1

From March 5, 2020.

If you've ever entered Numerai's data science tournament or you're active in the RocketChat, you're probably familiar with Arbitrage. A third-year PhD student studying finance, focusing on fintech and bank distress, Arbitrage has been involved with the Numerai tournament since 2016 (often ranking in the top 100 users) and teaches several finance courses, including Financial Machine Learning with Python, where he uses Numerai data.

In early March, Arbitrage held his first office hours for users to talk about the tournament at a high level and get help and feedback on their models. Here's a recap for those who couldn't attend or want to revisit any of the discussion.

Questions from Slido

Arbitrage began by answering questions from Slido, which was shared in advance of the office hours to collect questions from the community.

Can you summarize the circulation changes to Numeraire since the beginning and talk about the current float?

Originally, there were to be 21 million Numeraire (NMR). The initial distribution included about two million tokens, of which not all were shipped and many were locked up, making this difficult to track over time. The biggest change came in 2019, when Numerai reduced the maximum supply to 11 million NMR, dramatically shifting the possible circulation. Learn more about the change here.

Many of the tokens from the initial distribution went to the public float, with a portion locked because they were never withdrawn (or someone lost their keys). Arbitrage also pointed out that a lot of tokens were burned, noting that because of all of this he's unsure of the exact amount of NMR in public circulation, but it's nowhere near the maximum. Learn more about NMR numbers at https://numer.ai/nmr

Can you explain more about your suggestion for burn insurance?

"Since we're blind to the data, we have no way to control our model's systemic risk.
So if we can't model for it, how can we be liable for it?" Arbitrage's point is essentially that if the ultimate goal is model stability, there should be a mechanism to help protect against volatility. Changes in the data, he said, have reduced volatility over time and have already done a lot to address this issue; he cited the differences between his previous training on the validation eras and his current model performance (noting that he hasn't changed his actual model much).

What top 3 tips can you share for performing well on live data?

1. Don't forget the eras. Arbitrage suggested making sure your data is structured to account for them, such as by averaging across different groups of eras. "The key here is to think about it in terms of stocks," he said. "If we presume that an era is a month, and within each month we have a set of stocks, and we have 120 training eras, then it's possible that we have 120 observations of a single security. If that's true, we have to treat that differently; we're not able to use panel methods because we can't tag the IDs. So it would be bad to have all 120 observations in a single pool, because we would overfit on the data, and that's why we treat the eras separately."

2. Don't just use Validation as a holdout set. Arbitrage recommended using a much larger holdout set, potentially even half of the data. "When I started out, I didn't touch the validation data. I just split the data right in half and used the first 60 eras as training data, found the parameters I thought were good, and then tested on the other half of the training data. It was grossly overfit. Then, I trained on the first two-thirds of the training data, readjusted the parameters, and iterated that way, adding a bit more each time, and it wasn't until I thought I had it that I peeked at the validation set."

3. Look at the correlation scores.
Over time, Arbitrage said, between 3.6% and 4.4% correlation seems to perform well on live data. He added that, based on his observations, anything over 4.8% is likely to be overfit and return poor results.

Can you prove that Numerai can use high MMC (meta-model contribution) predictions in ways that add value to the hedge fund (without hand waving)?

"Apparently I'm a hand talker." — Arbitrage

Basically, no, because Numerai uses a bootstrap method to calculate MMC, meaning they're doing repeated sampling. If all of the models were present, clusters wouldn't matter as much as individual models along an efficient frontier; any individual model that created a portfolio falling on the efficient frontier would probably be selected.

However, Arbitrage also noted that this would most likely lead to only five or six models contributing to the meta-model, which isn't good. "That's why I argue that similarity isn't a bad thing; if anything, it boosts the significance of that modeling technique." As an example, he said that if he were the only person with a model in a cluster with a very high MMC, that wouldn't mean his model is any good (just not correlated).
But if Arbitrage exists in a cluster with, for example, other high-performing models, that validates that his methodology works. "I think it's important that we have 30, 50, 150, 200 people all using the same methodology but slightly different, because the benefit is that we're independently validating the methodology and staking on it separately… without that we just have a hodgepodge of users doing well randomly."

As a final point, Arbitrage added that the current configuration of MMC is very robust, but it doesn't give data scientists the ability to see what any given model is directly contributing to the meta-model; it's only an estimate.

Author's note: successfully answered without hand waving.

Can you explain what you think Numerai's assets are (market, investment type, etc.), taking into account what we know over time?

Numerai has already disclosed that they trade global equities, Arbitrage said, noting that this information has been public for a while. Based on how the tournament data is structured, their evaluation period is long (otherwise the models would need to be checked every second, which would require significant computing power beyond what most individuals have access to).

Any intuition why regression seems to work better than logistic targets or multi-class classification?

Arbitrage intrepidly admitted he doesn't know, suggesting it's most likely a product of how Numerai sets up the data. "I'm just glad it's a clean data set." As to why the data no longer has a classification target, Arbitrage speculated that "what they're really looking for is for us to put a number on it: a continuous measure of what we think a stock will do," adding that correlation seems to be working well.

Is there any promising field, theory, algorithm, or approach you think will be useful for Numerai-like problems?

Though empirical research in finance (Arbitrage's area of focus) is vastly different from algorithm design or machine learning, he noted that dynamic asset price theory is
intellectually interesting (if unrelated to the tournament). If we're collectively analyzing how capital can be deployed into markets in an effort to ensure that anyone entering a market can get a fair price, Arbitrage explained, that makes sense. However, the "drive to find an edge" is very difficult, "and that's why I think this is the most difficult machine learning challenge in the world — because we're all fighting for a half to one percent edge that disappears frequently."

Ultimately, though, Arbitrage said that within his spheres there really isn't anything new that would be applicable to Numerai-like problems. He explained that no new asset classes have been introduced into finance since around the 1950s, with cryptocurrencies as a recent exception (noting as well that derivative assets might qualify, though they still fundamentally represent intrinsic value, so the primitives are the same as older asset classes).

Since we know little about the assets or how predictions are utilized, why do you think there are historical limits on the hedge fund's earning capacity?

One of the limits for hedge funds is scale: "When you find an edge, trading on it collapses that edge because you're making it more efficient. When you find an edge in finance, you are therefore finding something that is an inefficiency in the market, and by trading on that inefficiency you are closing the profitability of it."

This presents a problem for hedge funds in finding a strategy that scales, which is absolutely true for Numerai as well. One of the challenges for Numerai specifically, Arbitrage said, will be having enough stability in their investment thesis to justify getting assets under management.
Arbitrage believes Numerai has been relatively stable because the time frame the tournament deals with has been consistent: the data scientists have always had a one-week lag in results and a one-month time frame. Other challenges could come from a change in securities law somewhere that impacts the data set, or from another hedge fund picking up on similar signals and trading against them, neither of which could be known in advance. "That's why there's so many training features in the data."

A hypothetical example: if one of Numerai's columns represented exposure to some asset and lots of other funds started trading against that column, its value would be diminished and it wouldn't carry any signal. If Numerai data scientists are overloaded on that column, they're going to get bad correlation scores because it won't be profitable (which is why it's so important not to overfit on any column of the data). Arbitrage also noted that Richard said Numerai plans to introduce more columns in the future, but that they won't be necessary to train against because the current data still works.

How much does Numeraire's volatility affect our models' effective Sharpe ratio?

"There isn't really an easy answer to that." In a hypothetical example, Arbitrage explained that purchasing 100 NMR at $10 each would result in a $1,000 fiat-equivalent stake. Should the value of NMR drop by 50%, even if a model generates a 10% return, the data scientist is still operating at a loss.
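Arbitrage's hypothetical is easy to check with quick arithmetic. A sketch of the fiat-equivalent outcome (the function and its simplifying assumption about payouts are ours; it ignores Numerai's actual payout rules and simply pays the model's return in NMR):

```python
def fiat_equivalent_return(nmr_staked, entry_price, exit_price, model_return):
    """Fractional fiat-value change of a stake after a token price move.

    Simplified assumption: the model's return is paid out in NMR
    at a 1:1 rate, which is not Numerai's exact payout mechanism.
    """
    initial_fiat = nmr_staked * entry_price          # value at stake time
    final_nmr = nmr_staked * (1 + model_return)      # stake plus payout, in NMR
    final_fiat = final_nmr * exit_price              # value at the new price
    return (final_fiat - initial_fiat) / initial_fiat

# 100 NMR staked at $10, NMR drops 50% to $5, model earns +10%:
print(f"{fiat_equivalent_return(100, 10.0, 5.0, 0.10):+.0%}")  # prints -45%
```

A 10% model return cannot offset a 50% token drawdown, which is exactly the risk burn insurance was proposed to address.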
The risk, he said, lies in fiat-equivalent returns, which are not easy to mitigate (one possibility being the burn insurance discussed previously). The challenge is how to reduce volatility, encourage participation, and make payouts proper incentives, while also being Sybil-resistant.

If you're passionate about finance, machine learning, or data science and you're not competing in the most challenging data science tournament in the world, take a minute to sign up. Don't miss the next Office Hours with Arbitrage: follow Numerai on Twitter or join the discussion on Rocket.Chat for the next time and date.

Thank you to Arbitrage for hosting Office Hours and for collaborating on this post.

Office Hours with Arbitrage #1 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 03. 20

Making requests on Erasure Bay

Numerai recently unveiled Erasure Bay to the world. Built on the Erasure protocol, Erasure Bay is a knowledge marketplace where anyone can request, well, anything that can be delivered as a digital file.

Setting up your accounts for Erasure Bay

To start making and fulfilling requests on Erasure Bay, you'll need two things: a Twitter account and an Authereum address.

1. Log into or create a Twitter account at Twitter.com.

2. Log into or create an Authereum account: authereum.com/signup

3. Visit the Erasure Bay website, erasurebay.org, and click on one of the "Sign In" buttons.

4. When the dialog box pops up, click "Sign In" again to connect your accounts.

5. Connect to Authereum. Your browser should populate all fields with the Authereum account you just created or logged into (as long as you still have a live session; otherwise, try refreshing your Authereum page).

After connecting your Authereum account, you'll be prompted to connect your Twitter account. Clicking "Authorize app" will redirect you to Erasure Bay, confirming your accounts have been connected. Add DAI to your account using your preferred method, and you're ready to start summoning intelligence out of thin air.

Make your first request

With your accounts connected (make sure you're signed in), you can now make a request from erasurebay.org. Put your request in the description field within the purple box. This can be any kind of file: a CSV of a baseball player's career performance, ten gigabytes of dog pictures, the answer to a Stack Overflow question, etc.

Then fill out the four additional fields:

Reward: How much you will pay the user who successfully fulfills your request.

Req. Stake: How much a user needs to stake to submit their file.

Punish Ratio: How much it costs you per dollar to burn a bad submission's stake.
(For example, if someone stakes $10 on a bad submission and your punish ratio is 0.5, it will cost you $5 to destroy their stake as punishment.)

Punish Period: How long after fulfillment you have to punish a bad actor.

After completing the form, click "Make Request" and follow the prompts to confirm the details of your request. Once the request is made, check out Erasure Bay on Twitter to see your request and hunt down others that you can fulfill.

Making requests on Erasure Bay was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.
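The punish ratio in the request form above is simple linear arithmetic. A tiny sketch (the function name is ours; Erasure Bay performs this accounting on-chain):

```python
def punish_cost(burn_amount: float, punish_ratio: float) -> float:
    """Cost to the requester of burning `burn_amount` of a submission's stake."""
    return burn_amount * punish_ratio

# The example from the form: a $10 stake at a 0.5 punish ratio
# costs the requester $5 to destroy in full.
print(punish_cost(10, 0.5))  # 5.0

# Partial punishment scales linearly: burning $4 of the stake costs $2.
print(punish_cost(4, 0.5))   # 2.0
```

A higher punish ratio makes punishment more expensive for you as the requester, so the ratio is effectively a dial between cheap policing and credible commitment not to grief fulfillers frivolously.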

Numeraire

20. 03. 18

Where’s NMR?

The new Erasure information market, Erasure Bay, is days away. Erasure Bay will use DAI for staking, and secretly burn up NMR in the background. 🧙‍♂️ Can you find NMR?

Tokens promised a lot: connecting a community to a common digital asset, letting everyone benefit from network effects, not just a company. But now we are seeing projects like 0x doing backflips to make their token make sense. Other projects, like Enigma, are straight up calling their token a security. Most tokens have vastly underperformed Bitcoin and ETH over the last few years.

DAI killed tokens. It's not perfect; it has problems of its own, and maybe worse tail risks than other tokens. But if you have an application that requires users to stake, and that application says "we don't let you stake any of the stablecoins," it's like, come on, wtf, that's my safest, easiest thing to stake! DAI showed us tokens don't make sense for consumer apps. So today we're announcing that Erasure Bay will use DAI for payments and staking.

Where does NMR go?

With Erasure Bay, if you stake $100 in DAI on a request and get punished, the DAI is sent to Uniswap to atomically buy NMR and burn it. We think this is a very sensible approach, since it allows Erasure to reach a broader base: it gives users the stability of DAI with the provable burning of NMR. Burning DAI is against its design; if Erasure were to burn the majority of the DAI supply, what would happen to its stability? NMR, by contrast, is designed to be scarce (the maximum supply is exactly 11 million). In the NMR 2.0 upgrade we decentralized the token and added burning functionality to provably decrease total supply. This allows NMR to capture the value of Erasure and continue to secure the platform while it scales to the masses.

This is an important step towards the integration of Web2 sites like eBay into Erasure. Their users only see USD, but behind the scenes it's a stablecoin that burns NMR.

Where's NMR?
was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.
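The punish flow described in this post (punished DAI sent to Uniswap to buy NMR, which is then burned) can be sketched with the standard constant-product pricing formula. Everything below is illustrative: the pool reserves are made up, the real swap happens atomically on-chain, and Uniswap's trading fee is ignored for simplicity.

```python
# Sketch of the burn flow: punished DAI buys NMR on a
# constant-product market (x * y = k, Uniswap-style, fee ignored),
# and the purchased NMR is burned, shrinking total supply.

def swap_dai_for_nmr(dai_in, pool_dai, pool_nmr):
    """NMR received for `dai_in` under x*y=k pricing (no fee)."""
    k = pool_dai * pool_nmr
    new_pool_dai = pool_dai + dai_in
    new_pool_nmr = k / new_pool_dai
    return pool_nmr - new_pool_nmr

nmr_supply = 11_000_000                    # fixed maximum supply
pool_dai, pool_nmr = 500_000.0, 25_000.0   # hypothetical pool reserves (~$20/NMR)

punished_dai = 100.0                       # the $100 stake from the example
nmr_bought = swap_dai_for_nmr(punished_dai, pool_dai, pool_nmr)
nmr_supply -= nmr_bought                   # burning provably reduces supply

print(round(nmr_bought, 2))                # prints 5.0 (about 5 NMR at ~$20)
```

Because the burned NMR is bought at the market price, every punishment both applies buy pressure to NMR and reduces its supply, which is the value-capture mechanism the post describes.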

Numeraire

20. 03. 09

Rallying the Numerati

“Veritas Numquam Perit” — Seneca

The launch of Erasure Bay is just around the corner. This is Erasure's first application for the masses: a marketplace for information of any kind. Now we turn to you, our amazing community. At last, it is time to sign up, share the word, and think about which Truths you want to make eternal.

Get Involved

Erasure Bay

Erasure Bay allows you to go from never having touched crypto to making a request for information on a decentralized data marketplace in 5 clicks. We truly designed it to work for any kind of information. This means it's up to early users like you to think about creative requests and guide the future of the application. Once launched, Erasure Bay is in the hands of the community. If you could request any information, what would it be?

Join the waitlist today: signup.erasurebay.org

Prepare a list of requests

Share the word: #erasureworld

Community Content

When people look at Numerai today, all they see is a quirky machine learning hedge fund. Not everyone gets it. But you do. You see that the internet is fundamentally broken. You see a team who's built a protocol to fix it. You see proof that it works because of a hedge fund called Numerai. Starting with Erasure Bay, NMR can now be used to value knowledge through free markets rather than centralized tech companies.

We recognize that many of you have been following Erasure for some time and have deep knowledge about the potential of #erasureworld. By creating content, you can help us fix the internet. Here are some examples we really liked.

Erasure explained with magical animals by Oliver Bruce

"1) I was talking to a friend of mine yesterday about @numerai and why I'm excited for what @richardcraib & team are building. I haven't seen my thinking explained well elsewhere, especially Erasure, so thought that I'd do a basic tweet storm explaining why I think it's important."
— @oliverbruce

Erasure Bay Art by Ørjan Røren: twitter.com/OrjanRoren

$NMR Analysis by Thibault Bonnivard

"Numeraire needed to be created (you can't burn DAI). As a result, it's one of the most used tokens on Ethereum, with a nice growth trend in the stakes (about half a million dollars as of today). Staked amounts can't be compared with traditional DeFi protocols such as Maker, where people stake for a yield. Indeed, on Erasure, the money is at risk and stakers provide real value."

Erasure Protocol $NMR — Token Analysis

Erasure for Pandemic Prevention by Ryan @ Crypto Blox

Imagine these people could get out the real data, anonymously and with rewards for accuracy. Would that have helped stop the spread?
Would that have helped other countries prepare and implement proper controls? Undoubtedly yes.

This Crypto Could Help Fight a Future 'Coronavirus'

February in Numbers

Stakes have continued to grow steadily in February. You can now keep track of how much USD is locked up on Erasure and see the #augurflippening on DeFi Pulse: defipulse.com/erasure

Bittrex has added NMR/ETH and NMR/USDT trading pairs on its US and Global exchanges:

"New Markets Update: The ETH-NMR and USDT-NMR markets on https://t.co/0PDjOTWp8h are now open for trading. For more information about Numeraire (NMR) visit: https://t.co/8Fv6HeAMrO @numerai $NMR" — @BittrexExchange

Things move quickly and it can be hard to keep track of all these numbers.
You can find all official metrics here: hackmd.io/@thegostep/rkotZOuzU

Pushing UX to new heights

A few weeks ago, Authereum went live and announced Erasure Bay as a launch partner. We share their vision of pushing Web3 UX to new heights and have been working hard with them to make it a reality. For Erasure to reach its full potential, we need it to be a delightful experience. With Authereum and several other great projects, we are nearly there. Here are some exciting Authereum features:

Free Transactions — No gas fees, ever

Minimal Clicks — Only confirm critical transactions

Fiat to Dai — Everything in USD with Apple Pay and Google Pay

Batched Transactions — "Approve… Transfer…" is a thing of the past

Key Recovery — Self custody without worry

Username and Password — Use what is familiar

We even built a feature of our own which any dApp developer can use:

Twitter Login — Automatically use your Twitter handle and picture on any dApp

Protocol Releases

v1.3.0 of the Erasure Protocol is now deployed to the Ethereum mainnet. This release enables staking and payments in DAI, with NMR as the source of scarcity: DAI stakes are converted to NMR on Uniswap to perform a burn.

Jobs

We are on the lookout for elite talent to help on our mission to fix the internet. Software Engineer - Head of DevX @ Erasure

Community

General: Telegram / Rocket.Chat — Technical: Telegram / Rocket.Chat

Rallying the Numerati was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 03. 05

Welcome to Erasure 2020

Erasure is entering a new phase in 2020. As Fred Wilson wrote, the grand vision of Numerai comes to fruition with Erasure.

The Erasure Protocol - AVC

In 2019, we laid the groundwork for Erasure to achieve its full potential and bring Web3 to the rest of the Internet. This included:

Launching the protocol with its initial application, Erasure Quant

Migrating the Numerai tournament to the Erasure protocol — ultimately leading to an increase in stakes of 306%

Hosting the first ErasureCon in San Francisco

Bootstrapping a technical community with an online hackathon to build great tools

The Numbers

It can be difficult to track growth for a decentralized protocol, since by design it's impossible to measure metrics like daily active users (DAU) or churn. That's why we put together a dashboard to track some of the metrics that matter most to us: Current_Stake_Amount and Number_Agreements. Check it out at stakes.numer.ai.

Number_Agreements tracks the total number of agreements created on Erasure since the launch of the protocol in August 2019. The large increase represents the migration of the Numerai tournament to Erasure in October. It's a good approximation of the number of purchases on the protocol, but does not reflect repeated purchases like those on Numerai.

Current_Stake_Amount tracks the current value locked up in all Erasure agreements. This number goes up when new tokens are staked, and goes down when tokens are burned or withdrawn. Below is the week-over-week net change in those metrics.

We're happy to share these metrics and encourage you to create and share your own dashboards — the data is freely available on Google BigQuery and GraphQL.
Also, here's an awesome Twitter bot created by a community member; we recommend a follow:

"Weekly chart update $NMR" (@ErasureStaked)

Augur Flippening

Many compare Erasure to Augur since both are platforms that monetize predictions or information. For the first time this month, we saw an Augur flippening: the USD value of stakes on Erasure passed the USD value of stakes on Augur.

"there is now more staked on Erasure than all of @AugurProject's markets combined ($415k vs $380k) $NMR ✌️" (@richardcraib)

This is a great signal of where Erasure is headed. That said, it is worth drawing a distinction between the two projects. While Augur is a prediction market, Erasure is a portal to a new internet. One way to think about Erasure is as AWS for Web3: Numerai Web Services, if you will. Erasure is a suite of plug-and-play components that make new applications possible. An oracle powered by Augur would be a great future addition to Erasure's suite.

Tech Updates

In the second half of 2019, we hosted our first online hackathon with CoinList Build. This allowed us to bootstrap community tooling that makes integrating Erasure a piece of cake. These tools are completely open source, and contributing improvements is a great way to learn the ins and outs of the protocol; get in touch if interested.

"The Erasure Protocol hackathon came to a close this week. The objective was to build great tools that help developers build great applications. DevX is critical in this space to bring in more developers. Here are my favorite submissions:" (@thegostep)

Protocol Releases

We have released three new versions of the protocol since launch in August 2019.
See these GitHub releases for details:
v1.1.x in September included an upgrade to factories with significant gas optimizations
v1.2.x in December added the new escrow primitive to support payments
v1.3.x in January enabled staking and payments in DAI with NMR as the source of scarcity

We worked with some great teams on these releases, including OpenZeppelin, fulldecent, and samczsun.

Erasure Clients

We have clients in JavaScript and Python that allow for caching the data on the protocol, creating Erasure identities, building a track record, and selling data, all without needing to touch Ethereum.
robin-thomas/erasure-sdk
propulsor/Erasure-js
johngrantuk/numerai-helper
ankitchiplunkar/erasure.py

Protocol Explorers

Despite being completely public, blockchain data is incredibly difficult to make sense of. Explore NMR info with these cool tools:
propulsor/erasureGraph
propulsor/erasure-twitter-bot

Jobs

We are looking for a Head of DevX to join the Erasure team! If you are a talented software engineer with a passion for managing open-source communities, this is the gig for you.
Software Engineer - Head of DevX @ Erasure

Community Chat

General: Telegram
General: Rocket.Chat
Technical: Telegram
Technical: Rocket.Chat

Welcome to Erasure 2020 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 01. 31

Numerai in 2019

A year in review

Erasure

Erasure and NMR are critical components of Numerai's vision: a decentralized capital allocation engine. But Erasure also has the greater potential of bringing Web3 to every application on the internet.

2019 was the year of Erasure's awakening. We designed and implemented the protocol itself, built and migrated our own applications on top of it, and supported the community with grants and a hackathon to build out tools and libraries.

2020 will be Erasure's platform moment, where success is measured by how broadly the protocol is adopted and how much value it creates for its adopters.

Learn more by watching the Erasure Keynote presented at ErasureCon.

Kazutsugi dataset

The Kazutsugi dataset is the biggest data release we have done to date. It was designed to give users more flexibility in their modelling and to give Numerai predictions that can be better monetized.

The actual changes behind the obfuscation are hard to see, but they represent months of research and a big step toward our ongoing goals of providing the best available data to users and aligning incentives between the tournament and the hedge fund.

Learn more about the dataset's impact on the hedge fund in Richard's latest post: Achieving Meta Model Supremacy at Numerai.

Staking 2.0

Staking 2.0 is the biggest tournament design change we have done to date. The change was designed to greatly improve the core UX of staking by simplifying the rules, providing more feedback, and streamlining users' weekly workflow.

Since the release, the number of staked models has more than doubled, and we have seen dramatic improvements in submission and staking consistency, which is critical for meta model performance and research.

Learn more about tournament updates by watching the Numerai Keynote at ErasureCon.

User survey

In this first-ever all-user survey, we learned a lot about the backgrounds, motivations, and preferences of the community. In particular, it became clear that we need to manage tournament changes better and do more to help onboard users to the dataset and the data science problem.

Thank you for the great feedback and suggestions; we look forward to hearing from you again in the next survey. We hope you will like our upcoming changes and improvements in 2020!

Season 5

2020 marks the 5th year of Numerai, and what we have achieved together is amazing. Richard's recent tweet captured this perfectly:

"This week a few hundred machine learning models built anonymously on obfuscated data and staked with cryptocurrency were used to allocate capital in every major stock market in the world. (weird to say out loud but actually real @numerai)" (@richardcraib)
A big thank you to everyone in the community who has helped make this a reality.

To upending the hedge fund industry!

Numerai in 2019 was originally published in Numerai on Medium, where people are continuing the conversation by highlighting and responding to this story.

Numeraire

20. 01. 08

Transaction History

Exchanges: Bittrex, HitBTC, Bilaxy, Poloniex
Security verification

This project has completed security verification.


Information

Platform: ERC20
Accepting: -
Hard cap: -
Audit: -
Stage: -
Location: -