About me

  • Currently working as a Lead DX Engineer at Prefect & living in Berlin (Germany)
  • Past experience as an IT consultant, Data Engineer & Python Backend Engineer in various industries (audit, aerospace, e-commerce, financial & energy trading sectors)
  • Technical writer - I have written over 100 blog posts on Medium alone (as of early 2023).
  • Goal: support data teams in building reliable data ecosystems and sharing knowledge
  • AWS Certified Solutions Architect, passionate about building scalable and sustainable solutions to complex business problems

Past NLP research

In 2021, Prof. Roland Müller and I published "Research Method Classification with Deep Transfer Learning for Semi-Automatic Meta-Analysis of Information Systems Papers."

Here are some interesting findings from that research:

  • Classification of large text documents is MUCH harder than classifying shorter texts such as tweets or emails. In this paper, we tried to predict the correct categories by taking entire documents as inputs. Long texts make it harder for a model to distinguish between signal and noise and to learn useful feature representations.
  • Multilabel classification is much more challenging than binary classification (e.g., fraud or not, spam or not) because each text document can be assigned several categories at once. Moreover, the datasets used to train such models are often imbalanced, with one class far more prevalent than the others.
  • While shallow transfer learning techniques such as word2vec or GloVe already improve the learned representations significantly, deep transfer learning (e.g., ELMo, BERT, ULMFiT, the OpenAI Transformer) learns context-dependent word representations, which are much richer.
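To make the multilabel point above concrete, here is a minimal, dependency-free sketch (the documents and label names are hypothetical illustrations, not data from the paper). It contrasts two ways of scoring a multilabel classifier: exact-match ("subset") accuracy, where a document counts only if every label is correct, and per-label accuracy, where each (document, label) decision is scored like an independent binary classifier. The gap between the two is one reason multilabel problems feel harder than binary ones.

```python
# Illustrative multilabel evaluation; labels and predictions are made up.
LABELS = ["survey", "experiment", "case_study"]

# (true label set, predicted label set) per document
docs = [
    ({"survey"},               {"survey"}),
    ({"survey", "experiment"}, {"survey"}),                 # missed one label
    ({"case_study"},           {"case_study"}),
    ({"experiment"},           {"experiment", "survey"}),   # predicted an extra label
]

# Subset accuracy: a prediction counts only if ALL labels match exactly.
subset_acc = sum(t == p for t, p in docs) / len(docs)

# Per-label accuracy: every (document, label) decision counted separately,
# as if we had a bundle of independent binary classifiers.
decisions = [(lbl in t) == (lbl in p) for t, p in docs for lbl in LABELS]
per_label_acc = sum(decisions) / len(decisions)

print(subset_acc)     # 0.5  -> only 2 of 4 documents matched exactly
print(per_label_acc)  # ~0.83 -> 10 of 12 binary decisions were right
```

Two documents each got a single label wrong, which costs only 2 of 12 per-label decisions but fully halves the subset accuracy, so the choice of metric matters a lot when reporting multilabel results.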