
My Learning Journey
I’ve been developing my programming skills, with a strong focus on Python, while also exploring other languages and tools through courses on DataCamp. Along the way, I’ve been honing my quantitative skills, building a stronger foundation in mathematics and statistics. The sections below highlight the courses I’ve completed, reflecting the knowledge I’ve gained in programming, data analytics, and practical problem-solving.
* Note - These are in reverse chronological order.
Data Analyst in Python
I completed DataCamp’s Data Analyst with Python career track, consisting of 10 courses, 5 real-world projects, and a timed skills exam focused on data manipulation (score: 84/100). Throughout the program, I learned to clean, analyse, and visualise data using Python, pandas, NumPy, Matplotlib, and Seaborn. I practised importing data from multiple sources, handling missing values, merging datasets, and performing Exploratory Data Analysis (EDA) to uncover trends and insights. The track also introduced SQL for querying relational databases and integrating structured data into Python workflows. Through hands-on projects, I applied these skills to analyse diverse datasets, including business, environmental, and social data. Overall, the program strengthened my ability to manage the full data analysis workflow, from acquisition and cleaning to visualisation and insight communication, preparing me to deliver data-driven solutions in real-world contexts.

Sampling in Python
I learned to assess the accuracy of sample statistics by calculating relative errors and generating sampling distributions to measure variation in estimates. Working with datasets such as coffee ratings and Spotify tracks, I practiced applying these methods to quantify uncertainty in real-world contexts. A key focus was mastering bootstrapping: using resampling techniques to estimate variation when the underlying population is unknown. This helped me distinguish clearly between sampling and bootstrap distributions, reinforcing how resampling provides a flexible way to support inference and hypothesis testing.
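To give a flavour of the bootstrap technique, here is a minimal sketch; the ratings below are randomly generated stand-ins rather than the course’s coffee or Spotify datasets:

```python
import numpy as np
import pandas as pd

# Made-up sample of ratings, standing in for the course datasets
rng = np.random.default_rng(42)
sample = pd.Series(rng.normal(loc=82, scale=3, size=200), name="rating")

# Bootstrap: resample WITH replacement from the sample itself,
# recording the mean of each resample to build a bootstrap distribution
boot_means = [sample.sample(frac=1, replace=True).mean() for _ in range(5000)]

# The spread of the bootstrap distribution estimates the standard error,
# and its percentiles give a confidence interval for the mean
print(f"Sample mean: {sample.mean():.2f}")
print(f"Bootstrap standard error: {np.std(boot_means):.3f}")
print(np.percentile(boot_means, [2.5, 97.5]))
```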

Introduction to Data Visualization with Seaborn
I learned to create and customise plots such as scatter, bar, box, and count plots using real-world datasets. I practised building subplots, adding confidence intervals, and tailoring visualisations to clearly communicate insights. Working with scatter and line plots to explore relationships between variables also enhanced my quantitative analysis skills. Seaborn is often preferred over Matplotlib for its simpler syntax and attractive defaults, and over Plotly when static, publication-ready visuals are needed.
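A small sketch of the kinds of plots covered, using Seaborn’s built-in tips dataset rather than the course data:

```python
import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")

# Scatter plot split into subplots by a categorical column
sns.relplot(data=tips, x="total_bill", y="tip", col="time", hue="smoker", kind="scatter")

# Count plot and box plot side by side for categorical comparisons
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
sns.countplot(data=tips, x="day", ax=axes[0])
sns.boxplot(data=tips, x="day", y="total_bill", ax=axes[1])
plt.show()
```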

Web Scraping in Python
I completed a web scraping course in Python, where I learned the structure of HTML, XPath, and CSS selectors to navigate and extract information from web pages. Using the scrapy library, I built simple spiders to crawl sites and automate large-scale data collection, while also gaining transferable skills for libraries like BeautifulSoup and Selenium. This course was both challenging and rewarding. A huge thanks to my brother-in-law, Seb Strug, for explaining concepts and helping me with this one!
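As an illustration of the spider pattern, here is a minimal example against the public practice site quotes.toscrape.com (not one of the course exercises):

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    """Minimal spider for the quotes.toscrape.com practice site."""
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # CSS selectors pull out each quote block; XPath such as
        # response.xpath('//div[@class="quote"]') works equally well
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow the pagination link so the spider crawls every page
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```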

Introduction to Statistics in Python
I learned core concepts for analysing and interpreting data. I practised working with summary statistics such as mean, median, and standard deviation, and explored probability through random sampling, binomial models, and distributions including normal, Poisson, exponential, and t-distributions. The course also covered correlation, scatterplots, and study design, strengthening my ability to draw reliable conclusions from data using statistical reasoning.
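A few of these ideas in code form, using scipy.stats on made-up numbers (purely illustrative, not course data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=100, scale=15, size=500)   # made-up measurements

# Summary statistics
print(np.mean(data), np.median(data), np.std(data, ddof=1))

# Normal distribution: P(X <= 120) when X ~ Normal(100, 15)
print(stats.norm.cdf(120, loc=100, scale=15))

# Binomial model: probability of exactly 3 successes in 10 trials with p = 0.5
print(stats.binom.pmf(3, n=10, p=0.5))

# Correlation between two related variables
other = 0.5 * data + rng.normal(scale=5, size=500)
print(stats.pearsonr(data, other))
```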

Introduction to Functions in Python
I learned to write custom Python functions tailored to real-world data problems, building up from simple definitions to functions with multiple arguments, return values, default parameters, and variable-length inputs. A key step was understanding scope, ensuring functions behave as intended in different contexts. I also explored lambda functions, which provided a way to write concise, on-the-fly solutions, and practised handling errors to make my code more robust. Along the way, I came to appreciate the importance of writing clear docstrings, recognising how they make functions easier to use, share, and maintain. Applying these skills to Twitter data reinforced how thoughtful function design can streamline analysis and problem solving in data science workflows.
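A short sketch in the spirit of the course’s Twitter exercises; the function and its arguments are illustrative rather than the exact course code:

```python
def count_entries(records, *columns, default="unknown"):
    """Count occurrences of values in the given columns of a list of dicts.

    *columns lets the caller pass any number of column names;
    default is used when a record is missing a column.
    """
    counts = {}
    for row in records:
        for col in columns:
            value = row.get(col, default)
            counts[value] = counts.get(value, 0) + 1
    return counts

tweets = [{"lang": "en", "source": "web"}, {"lang": "fr"}, {"lang": "en"}]
print(count_entries(tweets, "lang", "source"))

# A lambda for a quick, throwaway transformation
shout = lambda word: word.upper() + "!"
print(shout("congratulations"))

# Error handling keeps functions robust to bad input
try:
    count_entries("not a list of dicts", "lang")
except AttributeError as err:
    print(f"Bad input: {err}")
```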

Exploratory Data Analysis in Python
I learned to clean, summarise, and validate both numerical and categorical data. Using real datasets, I practised handling missing values, explored relationships with Seaborn visualisations, and examined case studies such as alcohol use and student performance. A key skill I developed was generating hypotheses from findings, reinforcing how EDA fits into the wider data science workflow and guides the next steps in analysis.
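A compressed sketch of that workflow on a tiny made-up table (the column names are hypothetical, not the course’s case-study data):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "study_hours": [2, 4, None, 6, 8, 3, None, 5],
    "score": [55, 65, 60, 72, 85, 58, 62, 70],
    "school": ["A", "A", "B", "B", "A", "B", "A", "B"],
})

# Summarise the data and check where values are missing
print(df.describe())
print(df.isna().sum())

# Simple median imputation, then explore the relationship visually
df["study_hours"] = df["study_hours"].fillna(df["study_hours"].median())
sns.scatterplot(data=df, x="study_hours", y="score", hue="school")
plt.show()
```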

Introduction to Data Visualization with Plotly in Python
I learned to build high-quality, interactive charts directly in Python. I practised creating univariate and bivariate plots, customised visual elements such as annotations, hover effects, and legends, and combined multiple chart types in layered visualisations. The course also covered interactive controls like sliders, buttons, and dropdowns. These skills directly complement my work at Bloomberg, where I use Plotly extensively for data analytics visualisation.
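For illustration, a small Plotly Express figure built from the library’s bundled gapminder data rather than anything I use at work:

```python
import plotly.express as px

gap = px.data.gapminder().query("year == 2007")

fig = px.scatter(
    gap, x="gdpPercap", y="lifeExp", size="pop", color="continent",
    hover_name="country", log_x=True,
    title="Life expectancy vs GDP per capita (2007)",
)
# Annotations, hover text, and legends are customised on the figure object
fig.add_annotation(text="Bubble size = population", xref="paper", yref="paper",
                   x=0.01, y=0.99, showarrow=False)
fig.show()
```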

Intermediate Importing Data in Python
I completed an intermediate data importing course in Python, where I learned to access and extract data from the web. This included working with HTML for the first time, scraping and parsing web content, and pulling data from APIs such as OMDB, the Library of Congress, and the Twitter streaming API. Despite using the Bloomberg API daily in Excel and BQuant, this course gave me valuable hands-on experience integrating APIs directly within a Python environment for analysis and visualisation.
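The basic request-and-parse pattern looks like this; OMDb requires a (free) API key, so the key below is a placeholder:

```python
import requests

url = "http://www.omdbapi.com/"
params = {"apikey": "YOUR_KEY", "t": "The Social Network"}   # placeholder key

response = requests.get(url, params=params)
response.raise_for_status()

# The API returns JSON, which maps straight onto a Python dict
movie = response.json()
for key in ("Title", "Year", "imdbRating"):
    print(key, ":", movie.get(key))
```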

Introduction to Importing Data in Python
Here, I learned how to load and manage data from a wide range of sources. I practised importing flat files such as CSV and TXT, as well as working with Excel, SAS, Stata, HDF5, and MATLAB files using NumPy and pandas. The course also introduced relational databases, where I wrote and executed SQL queries to filter, join, and extract meaningful insights from structured data.
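A condensed sketch of those import patterns; the file names, connection string, and table are placeholders:

```python
import pandas as pd
from sqlalchemy import create_engine

# Flat files and spreadsheets
df_csv = pd.read_csv("data.csv")
df_xls = pd.read_excel("data.xlsx", sheet_name="Sheet1")

# Relational database: a local SQLite file queried with SQL
engine = create_engine("sqlite:///example.db")
query = """
    SELECT country, AVG(gdp) AS avg_gdp
    FROM economies
    GROUP BY country
    ORDER BY avg_gdp DESC
"""
df_sql = pd.read_sql(query, engine)
```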

Joining Data with Pandas
This course focused on combining and reshaping datasets to uncover deeper insights. I learned to use inner, left, right, and outer joins, as well as advanced techniques like semi-joins, anti-joins, and concatenation. The course also covered merging on indexes, handling time-series and ordered data, and validating results. Finally, I practised querying tables using a SQL-style approach, strengthening my ability to organise and analyse complex datasets.
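The core join types in miniature, on two made-up tables:

```python
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2, 3], "cust_id": [10, 11, 12]})
customers = pd.DataFrame({"cust_id": [10, 11], "name": ["Ada", "Grace"]})

# Inner join keeps only matching rows; left join keeps every order
inner = orders.merge(customers, on="cust_id", how="inner")
left = orders.merge(customers, on="cust_id", how="left")

# Anti-join: orders whose customer does NOT appear in the customers table
anti = orders[~orders["cust_id"].isin(customers["cust_id"])]

# Concatenation stacks tables that share columns
combined = pd.concat([orders, orders], ignore_index=True)
print(inner, left, anti, combined, sep="\n\n")
```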

Introduction to SQL
I completed an introductory SQL course where I learned how relational databases are structured and how to query them effectively. I practiced writing SQL queries to extract, filter, and customize results from tables, gaining hands-on experience with real data. The course also introduced the differences between PostgreSQL and SQL Server, giving me a solid foundation to build on for future database and data analysis work.
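Since all the examples on this page are in Python, here is the flavour of those queries run from Python against a throwaway in-memory SQLite database; the films table is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE films (title TEXT, release_year INTEGER, country TEXT)")
conn.executemany(
    "INSERT INTO films VALUES (?, ?, ?)",
    [("Metropolis", 1927, "Germany"), ("Parasite", 2019, "South Korea")],
)

# SELECT with filtering, aliasing, and ordering
rows = conn.execute(
    "SELECT title, release_year AS year FROM films "
    "WHERE release_year > 2000 ORDER BY year DESC"
).fetchall()
print(rows)   # [('Parasite', 2019)]
```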

Intermediate Python
This course focused on strengthening my coding and data handling skills. I worked with dictionaries and pandas DataFrames to organize and manipulate datasets, and practiced Boolean logic, control flow, filtering, and loops to make programs more dynamic. The course also introduced hacker statistics, giving me practical experience applying probability concepts in Python while expanding my ability to analyze and visualize data efficiently.
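A sketch of the hacker-statistics idea: estimate a probability by simulating it many times rather than deriving it analytically (the dice game below is illustrative, loosely in the spirit of the course’s final exercise):

```python
import numpy as np

rng = np.random.default_rng(123)
final_steps = []

for _ in range(10_000):            # repeat the whole game many times
    step = 0
    for _ in range(100):           # 100 dice throws per game
        dice = rng.integers(1, 7)
        if dice <= 2:
            step = max(0, step - 1)        # go down, never below zero
        elif dice <= 5:
            step += 1                      # go up one step
        else:
            step += rng.integers(1, 7)     # roll again and jump
    final_steps.append(step)

# Fraction of simulated games reaching step 60 or higher
print(np.mean(np.array(final_steps) >= 60))
```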

Introduction to Python for Finance
The Python for Finance course combined programming fundamentals I had learned previously with financial applications. It enabled me to revise core Python skills such as variables, lists, and arrays, and to use NumPy and Matplotlib to manipulate and visualise data. The course concluded with a financial analysis of S&P 500 data, where I applied these tools to summarise sector trends, plot ratios, and identify outliers, strengthening both my coding and quantitative finance skills.
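A toy version of that style of analysis; the sector names and P/E numbers below are invented, not the actual S&P 500 exercise data:

```python
import numpy as np
import matplotlib.pyplot as plt

sectors = np.array(["Tech", "Energy", "Health", "Utilities"])
pe_ratios = np.array([28.4, 12.1, 19.7, 45.0])

# Summary statistics and a crude outlier rule (beyond 1.5 standard deviations)
mean_pe, std_pe = pe_ratios.mean(), pe_ratios.std()
outliers = sectors[np.abs(pe_ratios - mean_pe) > 1.5 * std_pe]
print("Mean P/E:", round(mean_pe, 1), "| Outliers:", outliers)

plt.bar(sectors, pe_ratios)
plt.ylabel("P/E ratio")
plt.title("Made-up sector P/E ratios")
plt.show()
```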

Data Manipulation with Pandas
The Pandas course focused on data manipulation and analysis using real-world datasets. I learned how to inspect, filter, slice, and aggregate DataFrames, as well as import, clean, and transform data for deeper insights. The course also introduced creating visualizations directly from DataFrames, giving me practical experience in exploring and presenting data effectively with Python’s most popular data analysis library.
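In miniature, the inspect-filter-aggregate-plot loop looks like this (the sales table is made up):

```python
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame({
    "store": ["A", "A", "B", "B", "B"],
    "month": ["Jan", "Feb", "Jan", "Feb", "Mar"],
    "revenue": [100, 120, 90, 95, 110],
})

print(sales.head())                       # inspect
print(sales[sales["revenue"] > 95])       # filter with a Boolean condition
summary = sales.groupby("store")["revenue"].agg(["mean", "sum"])   # aggregate
print(summary)

# Plot directly from the DataFrame
summary["sum"].plot(kind="bar", title="Total revenue by store")
plt.show()
```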

Introduction to Python
My first Python course! I learned the fundamentals of coding with Python. I practiced using Python as a calculator, working with variables, types, and lists, and manipulating data. I also explored functions, methods, and packages, gaining experience with reusable code. The course finished with an introduction to NumPy, giving me hands-on skills in working with arrays and analyzing data.
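In the style of a typical first NumPy exercise (the heights and weights here are made up):

```python
import numpy as np

heights_cm = [180, 165, 172, 190]
weights_kg = [80.0, 60.5, 68.0, 95.2]

# Converting lists to arrays enables fast element-wise maths
np_heights_m = np.array(heights_cm) / 100
np_weights = np.array(weights_kg)

# BMI for every person at once, then Boolean indexing to filter
bmi = np_weights / np_heights_m ** 2
print(bmi.round(1))
print("Above 25:", bmi[bmi > 25].round(1))
```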
