Since this is the sales pitch, let me try to overcome the academic impostor syndrome: I am a T-shaped person who has done work on almost all aspects of computing, machine learning and optimization.
Originally trained as an electrical and computer engineer, I am currently a researcher and final-year Ph.D. student at the LIONS laboratory at EPFL. My doctoral studies started with neuromorphic computing hardware - trying to build brain-inspired ML accelerators - and slowly drifted into applied ML and ML/optimization theory.
This has given me the chance to work with the “full stack” of ML research and engineering. During my Ph.D. I have done hardware design and systems programming in C, C++, and Rust, written my own CUDA kernels, and built large-scale ML and RL systems in TensorFlow, PyTorch, and JAX. I have also supervised over 20 students and interns and worked with half a dozen industrial partners. Finally, over the last decade I have not only been a Ph.D. student, but also the co-founder of three startups, a software and ML prototyping consultant, the sys-admin of our lab, a contributor to lobbying for improved AI governance, and a board member of various associations and NGOs.
While my current research focuses on sample-efficient distributional modeling and reinforcement learning methods for combinatorial domains like sets and graphs, I got into ML wanting to understand decision-making processes in adaptive systems.
Together with a politically conscious socialization in the German hacking scene (particularly CCC-influenced online spaces), this has developed into studying the intersection of computer science with politics and economics, although my lens is of course mainly shaped by my engineering and ML background.
As such, you can split my research into a technical track and a humanities (poli-sci/economics/governance) track.
On the technical side, I am interested in using ML methods to study and improve corporate, economic, and political decision-making processes¹. I have also been drawn to more applied industrial problems like medical imaging and industrial control systems, where I care about ensuring the correctness, robustness, and fairness of the deployed systems.
On the humanities side, I am interested in the interaction of specific algorithmic systems and their biases with humans and society at large. In particular, I want to contribute to mechanisms of accountability that ensure the benefits of AI systems are not monopolized or abused², and to improve our understanding of governance structures in general³.
Somewhere in between lies the subject of algorithmic mechanism design and the game-theoretic analysis of existing politico-economic systems, for which I gave a rough outline of a research plan in a talk aimed at a non-technical audience at assoziation E.
As I wrap up my Ph.D., I am looking for new opportunities to pursue these interests in a research, applied-research, or leadership role that allows me to physically remain in Switzerland (this is a hard constraint). In particular, I am interested in working with private enterprises (tech firms, financial firms, startups, foundations, or private labs) towards the end of 2023.
A non-exhaustive list of roles for which I think I might be a good fit is given below, but I am also open to consulting work or out-of-the-box ideas. Generally, I’ve found that the most interesting opportunities are those you didn’t expect, so please don’t hesitate to reach out! (Email at the top.)
Roles which I consider myself a good fit for:
An example on the governance side would be using natural language processing to quantify the consistency between the proclaimed values of corporate and political entities and their behavior in markets/the political process, while industrial applications would include portfolio optimization, risk management, forecasting, or autonomous industrial systems (online bots and physical drones). ↩︎
See e.g. this Science publication I contributed to in the context of AI governance. ↩︎
See e.g. my publication on a complexity-theory-based view of policy making. ↩︎
Tim Scarfe interviewed my friend Carla Cremer and me about our perspectives on X-risk, governance in EA, and AI risk. I wish we had had time to go into all the reasons I don’t believe in AGI risk, but I am still glad we could make our points in such a nice format.
Although I am NOT an EA for various reasons, the Swiss EA communities’ focus on AI and governance means I keep hanging out with them. I’ve served as a board member of the EA Geneva association, have tried to bring a diversity of philosophies and angles of criticism into the community as part of their facilitator program, and sporadically give talks about AI risk.
I’m a board member of the alumni association of the Manage And More entrepreneurial scholarship, serve as a mentor to young entrepreneurs (e.g. as part of the 2022 ESADE eWorks program), and offer online ‘office hours’ for young founders to critique their business and technology stack. I also occasionally give talks at events, e.g. about blockchain technology at BSL.
I appeared as one of the experts on a special program on AI and algorithmic governance organized by the Swiss German national TV station SRF.