Yuval’s guest today is Scott Genin, Vice President of Materials Discovery at OTI Lumionics. Scott outlines his role in overseeing materials design and simulation programs, focusing on OLEDs and cathode patterning materials. He highlights OTI’s growth and its strategic integration of quantum-inspired methods due to current limitations in quantum computing. Scott remains optimistic about the future of quantum but emphasizes the need for further advancements, particularly regarding error correction and cost efficiency. He also shares insights on a novel encoding method that allows OTI to emulate up to 80 qubits classically, significantly improving their simulation capabilities. We segue into the potential for quantum computing to impact vibronic spectra calculations, the importance of achieving meaningful results within a competitive business environment, and much more.
Transcript
Yuval: Hello, Scott. Thank you for joining me today.
Scott: Thank you, Yuval. It’s a pleasure to be here.
Yuval: It’s nice to have you again. I think we spoke about two years ago, but for those who haven’t listened to the previous episode, could you remind us who you are and what you do?

Scott: Yeah, so I’m the Vice President of Materials Discovery at OTI Lumionics. I oversee and manage the materials design and simulation programs, which we use to design our mainstay product, cathode patterning material, and other OLED materials. That means I also oversee and do research into quantum computing methods, because it’s the potential frontier for doing quantum chemistry simulations. Material simulations underpin a lot of what we do at OTI, and they’re one of the reasons why we’re able to enter into mass production very rapidly, make changes for our customers whenever they request them, and be really ahead of the curve in terms of novel material design.
Yuval: And OTI, where are you located? How large is the company, give or take?
Scott: So OTI Lumionics is located, technically, in Mississauga, right at the boundary between Mississauga and Toronto in Canada, and we’re about 60 people now. So we’ve grown, but we’ve still remained relatively small for a chemical company.
Yuval: When we spoke last time, if I remember correctly, you mentioned that you tried to do some simulations on quantum computers. They weren’t good enough, so you went to quantum-inspired methods. How’s that going? Are you still a quantum computing holdout? And what would it take to change your mind?
Scott: I mean, I’m very impressed with the developments that have really happened, particularly in surface code error correction. And some quantum hardware vendors have posted excellent benchmarks on doing quantum phase estimation. But even with all these improvements, I still remain a bit hesitant that quantum computers are going to start simulating materials in a significant manner, because first and foremost, the accuracy of the methods that are posted or published is still far worse than what we can do classically, and even what we can emulate a quantum computer doing on classical hardware. The other thing is cost and runtime, right? How long does someone actually have to spend on a quantum computer, and how much does it end up costing? I know there’s some innovation happening in the cryo fridges and I’m not entirely up to date, but last I recall, quantum computers still require helium-3 to some extent, and helium-3 is not cheap. So what’s the run cost of these machines? But I am still hopeful that quantum computers will continue to advance and improve. Many quantum computing companies post roadmaps, and very few of them have matched their own roadmaps. But against my own internal roadmap, I think they have actually exceeded my expectations. It’s just that they still have a very long way to go to actually be impactful. The biggest piece of advice for anyone listening who wants to do this is: please stop using the STO-3G basis. You see a cyclobutane ring opening done with the STO-3G basis. That doesn’t exist. It’s an artifact of how basis sets are constructed. STO-3G hydrogen doesn’t exist. These things are beyond a toy model at this point; they’re contrived examples of things that do not exist in reality and don’t follow experimental measurements. And then people ask, why are things so far from experiment? Please stop using the STO-3G basis. And yet I continue to see the quantum computing hardware vendors keep using it. So that’s my piece of advice. If you really want to make a huge impact, go to a far more practical basis set, even for a small molecule. I think that would really demonstrate a serious commitment to actually doing quantum chemistry from the hardware vendor perspective.
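To make the basis set point concrete, here is a minimal sketch using the open-source PySCF package (not anything from OTI’s toolchain): the same hydrogen molecule gives noticeably different Hartree–Fock energies in the minimal STO-3G basis versus a more practical correlation-consistent basis.

```python
# Minimal sketch: the same H2 molecule in STO-3G vs. cc-pVDZ using PySCF.
# Illustrates how much the basis set choice alone moves the computed energy.
from pyscf import gto, scf

for basis in ("sto-3g", "cc-pvdz"):
    mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis=basis)  # bond length in angstrom
    mf = scf.RHF(mol).run()  # restricted Hartree-Fock
    print(f"{basis}: RHF total energy = {mf.e_tot:.6f} Hartree")
```

Anything beyond STO-3G grows the orbital (and hence qubit) count quickly, which is presumably why hardware demos tend to avoid larger basis sets.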
Yuval: So let’s not talk about a dream scenario, but maybe about a practical one: what would make you interested in quantum computing? How large does the molecule need to be? What kind of algorithm are you trying to run? And what might be a reasonable cost to simulate that molecule on a quantum computer? What’s the target the industry should strive for?
Scott: Well, I think that you need at least around 70 perfect qubits. These aren’t exactly logical qubits; it’s more like what they call algorithmic qubits, meaning you also have all the auxiliary error-corrected qubits. They also need to be fully connected. If you’re going down the electronic structure calculation route, the challenge is that you’re either using the variational quantum eigensolver or quantum phase estimation, but our internal simulator and the results that we’ve published show that the variational quantum eigensolver is very replicable on a classical computer. So quantum phase estimation is kind of your only, well, not only, but it’s one of your core algorithms that would actually generate some potential value there. But I would say that you have to aim for something like 70 perfect qubits, because the gate sequence on these quantum phase estimation circuits is just outrageous in how deep and complex it is. With OTI, we’ve also taken a slightly different approach: for the algorithms that we embed onto universal quantum computers, we’ve partnered with Nord Quantique, one of the three Canadian companies selected for the major quantum benchmarking program by DARPA, because we believe that their qubit architecture is very stable and has a lot of promise. In that respect, we’re actually pursuing more of what I would call auxiliary algorithms for quantum chemistry, like vibronic spectra calculations, because we know that these algorithms are harder to match to experiment classically, and there are a lot of approximations that go into them that a quantum computer actually does have a theoretical advantage over. With quantum phase estimation and VQE, some researchers have posited: is there even an advantage to running these things on a quantum computer, given that we have classical methods that technically don’t scale exponentially? So what’s the speedup that a universal quantum computer can provide to the electronic structure calculation? There’s probably some speedup, but if it’s the difference between a thousand dollars and ten dollars in computational cost, no one’s going to bite. Everyone in the applied field would rather pay ten dollars and wait an extra day for the result than pay a thousand, right? That’s one of the challenges: with GPUs, and particularly a lot of the open-source, high-quality quantum chemistry algorithms and codes, there’s a lot of cost competition. But underserved algorithms, or underserved areas like vibronic spectra, are where we believe there’s a lot of potential, and that’s why we’ve pursued this collaboration with Nord Quantique.
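As a toy illustration of the claim that small VQE runs are replicable classically, here is a minimal sketch using only NumPy and SciPy. The two-qubit Hamiltonian coefficients are illustrative placeholders, not a real molecular Hamiltonian, and the single-parameter ansatz is hypothetical:

```python
# Minimal sketch: an entire VQE loop emulated classically with NumPy/SciPy.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.diag([1.0, -1.0])

# Toy 2-qubit Hamiltonian with placeholder coefficients.
H = (-1.05 * np.kron(I, I) + 0.39 * np.kron(Z, I)
     - 0.39 * np.kron(I, Z) + 0.18 * np.kron(X, X))

def ansatz(theta):
    # |psi> = exp(-i*theta/2 * Y(x)X) |01>, a simple one-parameter entangler.
    psi0 = np.zeros(4, dtype=complex)
    psi0[1] = 1.0
    return expm(-1j * theta[0] / 2 * np.kron(Y, X)) @ psi0

def energy(theta):
    psi = ansatz(theta)
    return float(np.real(np.conj(psi) @ H @ psi))

result = minimize(energy, x0=[0.1], method="BFGS")
print(f"VQE energy:   {result.fun:.6f}")
print(f"exact ground: {np.linalg.eigvalsh(H)[0]:.6f}")
```

The point of the sketch is that nothing here touches quantum hardware: the “quantum” part is an ordinary matrix exponential, which is exactly why small VQE demos do not by themselves establish an advantage.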
Yuval: So 70 or 80 perfect qubits. The comparison for you is a couple of tens of dollars of execution cost. And runtime: it sounds like today you’re waiting a day or two to get your classical calculation, so something in that order of magnitude. It doesn’t have to be two minutes, but you probably don’t want 20 days either.
Scott: Yeah, 20 days is unacceptable, because you run into the problem that a chemist can just make the material in that time. And at that point, think of it from a business operations perspective: your boss, who doesn’t know much about the intricacies, just sees that a chemist made this in less than 20 days and you still don’t have a result, simulation-wise. That’s not a battle you’re winning in a corporate environment.
Yuval: I think you had an announcement about some developments where you’re able to emulate these 80 qubits classically. To the extent you can tell me, what’s the secret? What’s the breakthrough relative to where you were a year or maybe two years ago?
Scott: So in terms of the emulation of the qubits, that part isn’t new. We’ve already announced that we can scale up to 200, even 300 qubits, depending on the amount of RAM you can throw at the problem. We developed an encoding method, which we now have a publication and patent record on, that has been in development since about 2021. Basically, it’s unlike a state vector approach, where conventionally the state vector scales exponentially with the number of qubits, 2^n, and where the technique on those simulators is to start dropping very small terms. That’s a very valid approach, and it works quite well, but things still scale in an exponential way, right? You’re just lowering the pre-factor, maybe massaging the exponential scaling down a little. Through our encoding method, we’ve developed something that has polynomial scaling with respect to the number of qubits. So it scales O(n^4), O(n^3). That’s how we cram so many qubits onto it. The challenge before, and I think this is valid criticism from certain people in the community, is that previously we were using this simulator that can scale up to 300 qubits if you have enough RAM. Now we’ve made a lot of RAM efficiency improvements, so 300 qubits doesn’t take 4 terabytes of RAM anymore; now it’s closer to 2. Still, this thing is a pig on RAM. So there is a point, and I want to make this clear: I still predict there will be a point at which a quantum computer will exceed even what we’ve done. It’s just that that date has been pushed back further. Previously, we used a technique that allows us to use a non-variational approach to estimate the ground state energy of that Hamiltonian. And this happened very quickly; it was a calculation on the order of hours for a 300-qubit system. By running PT2 with Epstein–Nesbet corrections combined with qubit coupled cluster, we could get a very good estimate of the ground state energy of the Hamiltonian within hours. This is what we published back in 2022, with a very old version of the code that was still a scientific prototype; by converting it into high-performance code, we now get these incredible efficiencies. Then we developed ILC, an algorithm we developed in collaboration with the University of Toronto using Brillouin–Wigner multi-reference corrections. This ended up shutting down an entire path that people wanted to pursue on a quantum computer, using these non-variational approaches that Google even published a paper on, because we could replicate it on a classical computer so much more efficiently. And it wasn’t like, oh, there’s an error to it. No, this is an exact calculation. The problem is that it’s not variational. So one of the criticisms is that you sit there and say, but this isn’t variational, and that’s a valid criticism. What this new paper does is outline a variational approach to achieve a similar result. So now you can actually execute both.
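A rough back-of-the-envelope sketch of that scaling argument. The polynomial constant below is tuned only so that 300 qubits lands near the ~2 TB figure Scott mentions; it is not OTI’s actual encoding:

```python
# Minimal sketch: exponential state-vector memory vs. a hypothetical O(n^4)
# encoding. complex128 amplitudes cost 16 bytes each.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

def polynomial_bytes(n_qubits: int, c: int = 256) -> int:
    # c is an arbitrary illustrative constant, chosen so n=300 gives ~2 TB.
    return c * n_qubits ** 4

for n in (30, 80, 300):
    sv_tib = statevector_bytes(n) / 2**40
    poly_gib = polynomial_bytes(n) / 2**30
    print(f"n={n:>3}: state vector ~{sv_tib:.3g} TiB, O(n^4) encoding ~{poly_gib:.3g} GiB")
```

At 30 qubits a full state vector is a manageable 16 GiB; at 80 qubits it is already around 10^13 TiB, while anything polynomial in n stays in ordinary-server territory.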
Again, this thing is done with 24 CPUs calculating within 24 hours. It’s very rapid, very fast. The other big advantage is this: with the variational quantum eigensolver, the optimization itself isn’t necessarily even solved very well. You know, we have ADAPT-VQE; ADAPT-VQE basically tried to solve this optimization problem by making the ansatz more optimizable. In our case, we don’t even have that issue. Previously, we’ve always optimized qubit coupled cluster ansätze using gradient optimization methods, and this allows us to continue that approach. So we don’t need these Nelder–Mead, derivative-free approaches. We can exactly calculate and optimize all these parameters at the same time: 500,000, a million RZ rotation gates on a circuit, optimized perfectly. The trick is basically to downfold the exponential scaling, right? Because normally, when you construct the Hessian that allows you to use gradient optimization, the Hessian explodes exponentially. But most of the time, the amplitudes you’re trying to optimize are not particularly big. So taking numbers that are less than 0.02 and multiplying hundreds of thousands of them together gives you a very tiny number, right? So while it’s important to include these gates, or entanglers, on the circuit, why do I have to optimize each one with respect to all the other parameters? That’s why it’s like a polynomial optimization process: there’s a truncation of this polynomial function to a certain degree. And that allows you to encode it onto classical memory very efficiently and to optimize it using a gradient. Our simulator basically only calculates these expectation values; that’s why we can calculate them with zero error, and because of that, we can execute these rapid optimizations.
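A minimal sketch of why that truncation is safe, using the 0.02 figure from the conversation (illustrative numbers only):

```python
# Minimal sketch: products of small amplitudes decay so fast that high-order
# cross terms fall far below chemical accuracy and can be dropped, keeping
# the polynomial (and hence the Hessian) a tractable size.
CHEMICAL_ACCURACY_HARTREE = 1.6e-3   # ~1 kcal/mol
amp = 0.02                           # typical optimized amplitude magnitude

for order in range(1, 7):
    term = amp ** order
    keep = "keep" if term > CHEMICAL_ACCURACY_HARTREE else "drop"
    print(f"order {order}: |t|^{order} ~ {term:.1e}  -> {keep}")
```

With amplitudes this small, all but the lowest orders are already far below anything an experiment could resolve, which is the intuition behind truncating the expansion by degree.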
Yuval: Is this used just internally or do you make it available in some fashion to others to use?
Scott: Currently this is only in use internally, but I mean, you can always send me an email if you’re interested in it. We negotiate pretty hard on the terms, right? Because we don’t want any sort of contamination with our other IP. But a fair offer is a fair offer, so we’ll definitely entertain any of those discussions. Currently, though, we only use it internally.
Yuval: You mentioned that you have internal expectations of quantum computing vendors and you sometimes compare them to the roadmaps. And I think you mentioned that many quantum computing vendors did not meet their own roadmaps, but exceeded your internal estimates. So if you wouldn’t mind, let’s extrapolate: what do you internally estimate that quantum computing vendors will be able to do in the next one or two years?
Scott: Yeah. So currently, I like to use the standard of how many two-qubit gates a quantum computer can execute while still retaining a valid answer, right? It doesn’t even have to be 100% perfect, but how many CNOT gates can you execute? Currently, I believe the best quantum hardware is able to execute around 2,000 to 2,500, which was above my expectation. I thought that this year the record would be around 1,500, so they’ve done an excellent job of exceeding my expectations. I suspect that number will probably go up to maybe 4,000; I think we’re just about sub-doubling every year, so I expect that number would concretely be passed at least two years out. I think eventually it will slow down, because even with these surface error correction codes, unless another massive breakthrough happens... you know, the breakthrough that Google made was incredibly fantastic for making these things work, right? It’s a testament to their engineering team and to their theoretical team, in terms of their dedication to addressing a core problem with quantum computers. I know some companies have posited that they’ll be able to execute 100,000 two-qubit gates. I think we’d be doing great even if we got to 20,000 two-qubit gates.
Yuval: So 20,000 two-qubit gates, which I guess people typically think of as a one-in-20,000 error rate, the fidelity of the two-qubit gates. Would that be sufficient for your application, or is that just your estimate of where the market’s going?
Scott: No, that’s my estimate of where the market’s going. It would not, right? Like, we did over a million, right? And one of the inherent challenges is that unlike when people do the random cross-entropy calculation, where the two-qubit gates are nicely lined up into one cycle of execution, we don’t have that; each gate must follow one after the other. So the depth that we actually need is a bit more complicated than just the cross-entropy calculation. But I still think that would be fantastic. I think it opens up some more niche applications. I don’t think it would make quantum computers mass market for quantum chemistry simulations, because it’s a very challenging barrier for electronic structure: the legacy of electronic structure is so deep, and there are so many esoteric techniques that people can use. You know, CCSD already scales, say, O(N^7) or O(N^8), and yet people use DLPNO to bring that scaling down dramatically. And for a lot of applications, it works well enough, right? That’s one of the things: in an electronic structure setting, quantum computing will always be challenged by the sheer computational speed of GPUs and their continuing decrease in cost. But I do think that by doing 20,000 CNOT gates, you’ve opened up things in vibronic spectra calculations. We know that those do not scale well on classical hardware, and the error rates classically are pretty significant; the deviation between experiment and simulation is pretty broad. And so that’s why we think that by the time we get to 70 perfect qubits that can nearly perfectly execute 20,000 two-qubit gates, we will start seeing the beginning of where the output is actually meaningful. Maybe the cost is prohibitive, but at least it’s a meaningful output, right? Because right now, I hate to say it, but which quantum computer has actually produced a result that is even remotely close to FCI? None. We can sit here and forecast that quantum phase estimation is going to do it, but until someone actually does it, it’s just a hypothetical. And my concern is that usually in quantum chemistry, something pops up that jump-scares people. They’ll say, oh, we did quantum phase estimation using the STO-3G basis. Yeah, but that Hamiltonian is sparse. What if we had a very dense Hamiltonian with a completely different basis? Oh, now it doesn’t work and the runtime has scaled exponentially. It’s like, great, but the thing that actually gets me the result, the actual result that’s comparable to experiment, isn’t even feasible on it. It’s always, oh, we’ll get there. So I just think that we have to be a bit cautious about forecasting exactly when a quantum computer is going to take on electronic structure. But like I said, there are many other quantum chemistry algorithms out there that have a lot of meaning and are underutilized because they’re just not accurate enough.
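A tiny sketch of that depth point, with purely illustrative numbers (not from any published benchmark): the same two-qubit gate count implies very different circuit depths depending on whether gates can be packed into parallel layers, as in cross-entropy benchmarking, or must run one after another, as in a typical chemistry circuit.

```python
# Minimal sketch: gate count vs. circuit depth under two packing assumptions.
n_qubits = 50            # hypothetical device size
n_two_qubit_gates = 2500 # hypothetical gate budget

# XEB-style: gates on disjoint qubit pairs run in the same cycle,
# so up to n_qubits // 2 gates fit per layer.
parallel_depth = n_two_qubit_gates // (n_qubits // 2)

# Chemistry-style worst case: each gate depends on the previous one,
# so the depth equals the gate count.
sequential_depth = n_two_qubit_gates

print(f"parallel (XEB-style) depth:    ~{parallel_depth} layers")   # ~100
print(f"sequential (worst-case) depth:  {sequential_depth} layers") # 2500
```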
And that is an area where I think quantum computers, because they can bring down the exponential scaling of those problems to polynomial scaling, will see the first applications. But again, these applications are going to be niche, right? Working in OLEDs, vibronic spectra mean a lot to me; there are many other domains of materials or pharmaceuticals in which this calculation has no meaning. But for very high-value materials like OLEDs, that simulation cost is something we can afford to execute on, right? And that’s why we’re so dedicated to advancing our own theoretical and simulator state of the art, but also to partnering with quantum hardware vendors that we see a lot of potential in.
Yuval: So we spoke two years ago and today you say, okay, I can simulate 80 qubits in a specific algorithm and I get good agreement with experimental results. Where do you expect to be in two years?
Scott: Oh, much further. I expect that even by the end of this year we’ll have above 100 qubits with very good agreement with experimental results, right? As I publicly announced, this is just the beginning. The paper is more so putting a lot of other groups on notice about claims of offloading quantum simulations onto classical hardware. I don’t want to name names, but you’re kind of on notice, because I can replicate these things very efficiently. So if you’re going to claim you’re doing quantum chemistry on a quantum computer, you’d better be doing the whole thing on it. You’ve got to really commit to running these things on a quantum computer, because of what we’ve just found. Two years from now, we will probably easily be doing 300 qubits, is my prediction. And as we continue to make algorithmic improvements, upgrade the hardware, and figure out how to access more CPUs and better distributed RAM, we’re going to break that 300-qubit barrier, and hopefully we’ll have incredibly good results that agree with experiment. But I’d say just stay tuned for the rest of the year. There are definitely going to be more exciting results coming out, I guarantee you.
Yuval: I ask most of my guests who they would have in a hypothetical dinner.
Scott: I remember this question.
Yuval: I remember your answer from two years ago, but let me ask it again. As we sit here today, who would be your hypothetical dinner guest? It doesn’t have to be the same one this time.
Scott: I think there are two approaches, right? So I would say that for me, there’s always this theoretical basis. I would like to at least be able to talk to Niels Bohr. Because I have this gripe, which I think sometimes gets me in trouble, about this many-worlds, infinite-multiverse thing. Maybe because I have an engineering mindset, and my training is actually in chemical engineering, I fail to see the logic in how people say that just because something is probabilistic, it suddenly means that every single outcome is possible. So I would like to talk to Niels Bohr about that. Maybe that’s too much self-affirmation, though, because I believe his interpretation is just that sometimes things are probabilistic. I’d have to say that’s it, because as quantum computing is becoming bigger, there’s also more noise associated with it. I still think that hardware vendors are optimistic because hardware vendors are optimistic; we all have to be somewhat optimistic, especially about the stuff we’re selling. But I still think that we’re meeting good milestones. I’m not disappointed one bit. But I see a lot more people posting about this multiverse, and you can see things like Black Mirror distorting what a quantum computer is and what it’s actually going to end up doing. And it just bothers me for some reason. This multiverse interpretation, that every option exists, bothers me to the core. Because many things could happen, they must all happen? It’s like, yeah, but why? Sometimes things happen randomly; sure, but why must everything happen? I remain highly unconvinced. So I would prefer to ask one of the core founders of quantum physics: is this even the interpretation they would have thought would make it into mainstream media at this point?
Yuval: Scott, thank you very much for joining me today.
Scott: Okay. Thank you very much.