Jim Simons Trading Secrets 1.1: Markov Process

Jim Simons is considered to be one of the best traders of all time; he has even beaten the likes of Warren Buffett, Peter Lynch, Steve …


25 Comments

  1. The second part of the video, which covers how Jim Simons generates simulated data, can be found on our YouTube channel. If you have no experience in Python, watch our full "Algorithmic Trading in Python Zero to Hero" video, also on our channel.

  2. I suspect Jim Simons also does correlation trading based on what mathematicians call 'action limits' within 'activity networks'. When two financial instruments or asset classes deviate from their known means and standard deviations over time, an 'action limit' can be set in the algorithm. For example, changes in the price of the 10-year T-note have shown a strong correlation to the price of copper divided by the price of gold. If a rare event occurs beyond, say, three standard deviations, as calculated by a computer program, then it is highly probable that the price of copper will fall and the price of gold will rise, so that the correlation with the price of the T-note regresses toward the mean. The 'action limit' looks like μ ± 2.5·σ/√N, where σ is the sample standard deviation and N is the number of values in the sample of copper/gold ratios. So the computer will automatically short copper and buy gold at certain times until the action limit is no longer triggered. It has to do with 'critical path analysis', where vertices in the path represent different activities to be performed, such as the computer generating orders to buy, sell, short, etc.
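
    A minimal sketch of the trigger described above, assuming hypothetical copper and gold price series and taking the μ ± 2.5·σ/√N band literally as the trading threshold:

    ```python
    import numpy as np

    def action_limit_signal(copper, gold, k=2.5):
        """Check the latest copper/gold ratio against mu +/- k*sigma/sqrt(N)."""
        ratio = np.asarray(copper, dtype=float) / np.asarray(gold, dtype=float)
        n = len(ratio)
        mu = ratio.mean()
        sigma = ratio.std(ddof=1)             # sample standard deviation
        limit = k * sigma / np.sqrt(n)        # half-width of the action-limit band
        if ratio[-1] > mu + limit:
            return "short copper, buy gold"   # ratio stretched high: bet on reversion
        if ratio[-1] < mu - limit:
            return "buy copper, short gold"   # ratio stretched low
        return "no action"

    # Toy usage with made-up prices
    print(action_limit_signal([4.10, 4.15, 4.20, 4.05, 4.60],
                              [1900, 1905, 1910, 1895, 1880]))
    ```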

  3. One more thing for quant traders, if you'd like to know: even after finding the transition matrix, there is one more concept, validity. In a low-validity environment, where the outcome is highly random like a coin toss, suppose that in 100 tosses you get 90 heads and 10 tails. What is the probability on the 101st toss? It's certainly not 90% heads and 10% tails, right? It's still 50/50, because the outcome is random. The Markov explanation above only works when the numbers in the transition matrix actually have some reasoning backing them. Nice work @quantprogram.
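
    The coin-toss point is easy to verify numerically: fit transition counts to genuinely random flips and both estimated probabilities drift toward 0.5, so the fitted matrix carries no predictive signal. A small sketch (the helper name is ours):

    ```python
    import random

    def estimate_transitions(seq):
        """Estimate P(next='H' | current) from a sequence of 'H'/'T' flips."""
        counts = {('H', 'H'): 0, ('H', 'T'): 0, ('T', 'H'): 0, ('T', 'T'): 0}
        for prev, nxt in zip(seq, seq[1:]):
            counts[(prev, nxt)] += 1
        p_h_given_h = counts[('H', 'H')] / (counts[('H', 'H')] + counts[('H', 'T')])
        p_h_given_t = counts[('T', 'H')] / (counts[('T', 'H')] + counts[('T', 'T')])
        return p_h_given_h, p_h_given_t

    random.seed(0)
    flips = [random.choice('HT') for _ in range(100_000)]
    print(estimate_transitions(flips))  # both near 0.5: history carries no signal
    ```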

  4. Beautiful work.

    Now we as your audience can help optimise the code and share findings.

    For example, there is no need to calculate both up_to_up and up_to_down; basic probability gives Prob(up_to_down) = 1 − Prob(up_to_up), because each row of a transition matrix sums to 1 (the complement rule). So if you calculate one, you know the other.
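
    A minimal illustration of that shortcut, using toy returns and the up_to_up / up_to_down naming from the video:

    ```python
    import numpy as np

    returns = np.array([0.01, -0.02, 0.005, 0.007, -0.01, 0.003])  # toy daily returns
    up = returns > 0

    # Count only the transitions out of an "up" day; one probability fixes the row.
    from_up = [(a, b) for a, b in zip(up, up[1:]) if a]
    p_up_to_up = sum(b for _, b in from_up) / len(from_up)
    p_up_to_down = 1.0 - p_up_to_up  # complement rule: the row sums to 1
    print(p_up_to_up, p_up_to_down)
    ```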

  5. Please keep making more videos about reinforcement learning concepts. This is amazing; no one else on YouTube is breaking down these concepts as gracefully as you just did. Phenomenal stuff, man. Thank you.

  6. Question: At some point it is stated that Markov chains do not care about the history of previous states, but I feel like this is contradicted by then showing a model where we check whether the past 3 days have been loss days. What am I not understanding? Do we consider "4 days of consecutive loss" to be a single state?
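
    One common resolution of this question: the Markov property is relative to how the state is defined, so several days of history can be folded into a single composite state, exactly as the commenter guesses. A minimal sketch, assuming hypothetical 'W'/'L' labels for win and loss days:

    ```python
    from collections import defaultdict

    # Composite states are tuples of the last 3 outcomes, e.g. ('L', 'L', 'L').
    # A chain over these tuples is still first-order: the next composite state
    # depends only on the current one, even though each tuple encodes 3 days.
    def fit_transitions(days):
        """Count transitions between composite 3-day states in a W/L sequence."""
        counts = defaultdict(lambda: defaultdict(int))
        for i in range(len(days) - 3):
            state = tuple(days[i:i + 3])      # today's composite state
            nxt = tuple(days[i + 1:i + 4])    # tomorrow's composite state
            counts[state][nxt] += 1
        return counts

    days = ['W', 'L', 'L', 'L', 'W', 'W', 'L', 'L', 'L', 'L', 'W']
    trans = fit_transitions(days)
    print(dict(trans[('L', 'L', 'L')]))  # what follows three consecutive loss days
    ```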
