Lazar Valkov

I am a postdoctoral researcher at the MIT-IBM Watson AI Lab. I received my PhD from the University of Edinburgh, advised by Prof. Charles Sutton, and my MSc in Computer Science from the University of Oxford. I have also completed research internships at Amazon and Meta.

Research Highlights

Modular Continual Learning - showed that neurosymbolic methods can attain different types of knowledge transfer and can scale to large data streams.

Continual Pretraining - showed that continual pretraining of foundation models can be framed as a multi-armed bandit problem. The resulting method achieves state-of-the-art results in reducing forgetting.

Research

Focus: Enhancing the learning efficiency of neural networks to reduce their reliance on large datasets.

Approach: Introducing inductive biases into models:

  • by biasing a model's weights based on similar tasks;
  • by augmenting a model's architecture using neurosymbolic methods.

Philosophy: I aim to develop principled methods by initially tackling simpler problem instances, then scaling these methods to more complex scenarios.

Current Research Directions
  • Continual Learning
  • Reasoning for LLMs


email: <first name>valkov@gmail.com