Course Overview


Our Analytics Engineering course is a ten-week, part-time program that will give you the skills you need to go from data analyst to analytics engineer.

The course is fully remote — students can expect to spend 3-4 hours a week working through lectures and exercises asynchronously, plus a one-hour live session with their instructors and cohort for deeper discussion.

Tools you’ll master

  • The command line
  • Git and GitHub
  • dbt (data build tool)
  • Cloud data warehouses
  • Python

Note: We’ll be using BigQuery in the course, but most of what we learn will be relevant for users of Redshift and Snowflake.

Skills you’ll learn

  • Testing patterns (in SQL and Python)
  • Code collaboration via git
  • Data modeling
  • Advanced SQL patterns
  • Debugging
  • Data warehouse performance optimization


Week 1

Before the course, we’ll ask you to spend about 30 minutes getting your computer set up and writing a SQL query, which we’ll use as the basis for later lessons.

In week 1, we’ll get everyone familiar with their development environment, run student orientation, and discuss the history of analytics engineering.

Week 2

Command line basics | Version control & Git
Week 2 covers a lot of ground — for some students most of this content will be familiar, but for others it may be mostly new! We (and the rest of your cohort) will be here to support you.

First, we’ll learn how to use the command line to navigate our computers — important groundwork for many of the tools we’ll learn in the course!
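To give you a taste, here’s a minimal sketch of the kind of navigation commands we’ll start with (the directory names here are just illustrative):

```shell
pwd                          # print the directory you're currently in
mkdir -p projects/analytics  # create a nested directory
cd projects/analytics        # move into it
ls -la                       # list its contents, including hidden files
cd ../..                     # move back up two levels
```

These few commands are enough to move around a project from the terminal, and they come up constantly once we start working with Git and dbt.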

Then, we’ll cover all things version control and Git: from creating repos and making changes, all the way to handling merge conflicts.

We’ll also discuss SQL style and use our new git workflow as an opportunity to refactor a query.
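For a flavor of that workflow, here’s a minimal sketch of a branch-and-merge cycle (the repo, branch, and file names are illustrative, and we assume Git is already installed):

```shell
mkdir demo-repo && cd demo-repo
git init                                    # turn the directory into a repo
git checkout -b main                        # name the first branch "main"
git config user.name "Student"              # one-time identity setup
git config user.email "student@example.com"

echo "select 1 as id" > model.sql
git add model.sql
git commit -m "Add first model"             # snapshot the file

git checkout -b refactor-model              # branch off to make a change
echo "select 1 as customer_id" > model.sql
git commit -am "Rename column for clarity"

git checkout main
git merge refactor-model                    # bring the change back to main
```

In class we’ll go further than this happy path — including what to do when two branches change the same line and Git asks you to resolve the conflict.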

Week 3

Transforming data with dbt
We’ll learn what dbt is, and build our first dbt project together complete with tests and documentation. Along the way, we’ll learn about DAGs, deployment environments, and running jobs in production.

Week 4

Data modeling
Data models, star schemas, denormalization, Kimball — what do they all mean, and how relevant are they anyway? We’ll walk you through these concepts and discuss the process of designing a data model as an analytics engineer. You’ll spend time building your own data models in your dbt project.

Week 5

Advanced SQL
This week will be all about SQL! We’ll look at some of the common patterns we see as analytics engineers and implement them in our dbt project — we’ll write the SQL to resolve user identities, perform a cohort analysis, fan out (or date spine) your data, and aggregate page views to web sessions.

Week 6

Data warehouses
This week is heavier on theory: we’ll study how databases work fundamentally, and how data warehouses specifically are optimized. We’ll cover everything from row vs. column stores to the basics of distributed computing. By the end of this lesson, you’ll understand how to tune a data warehouse and the theory behind it.

Weeks 7 & 8

Advanced dbt
Over these two weeks, we’ll dive into some of the more advanced features of dbt — writing Jinja, using packages, and some of the advanced materializations available.

Weeks 9 & 10

Python for analytics engineers
If you’re brand new to Python, we’ll start off by introducing some of the basics of the language.

Then, we’ll learn how to write Python like an engineer — we’ll cover running Python outside of a notebook, setting up virtual environments, code style, tests, and modularity. By the end of these two weeks, you’ll have all the tools at your disposal to write your own scripts and command-line tools, and even consider contributing to an open-source package!
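As a small sketch of where that ends up — an isolated environment, a module, and a test run from the command line — something like the following (the file and function names here are invented for illustration):

```shell
# Create and enter an isolated environment for the project
python3 -m venv .venv && . .venv/bin/activate

mkdir -p src tests

# A small, importable module -- no notebook required
cat > src/metrics.py <<'EOF'
def conversion_rate(orders: int, sessions: int) -> float:
    """Return orders per session, guarding against division by zero."""
    return orders / sessions if sessions else 0.0
EOF

# A unit test for the module, using the standard library's unittest
cat > tests/test_metrics.py <<'EOF'
import unittest

from src.metrics import conversion_rate

class TestConversionRate(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(conversion_rate(5, 100), 0.05)

    def test_zero_sessions(self):
        self.assertEqual(conversion_rate(0, 0), 0.0)

if __name__ == "__main__":
    unittest.main()
EOF

# Run the whole test suite from the command line
python3 -m unittest discover -s tests
```

Splitting logic into modules and covering it with tests like this is what makes scripts reusable — and it’s the same structure you’d find inside most open-source packages.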

Course Calendar

Winter 2022

  • Applications open: January 3
  • Applications close: January 14
  • Class starts: February 14 (week of)
  • Class ends: April 18 (week of)

Summer 2022

  • Applications open: April 26
  • Applications close: May 6
  • Class starts: June 6 (week of)
  • Class ends: August 8 (week of)

Fall 2022

  • Applications open: August 22
  • Applications close: September 2
  • Class starts: October 3 (week of)
  • Class ends: December 12 (week of)

How it Works

Asynchronous learning

3 hours per week | Allow additional time for extension exercises
Each week, we’ll give you access to the module for the week. A typical module includes:

Lectures: A mix of written and video content (medium selected based on the best way to communicate the ideas)

Exercises: Ways to put what we’ve learned into practice.

Additional resources: Related content that you can bookmark for later

The exact mix of lectures and exercises will vary each week, but we aim to keep the required content within 3 hours of work for a typical student. You’ll also have the opportunity to work on extension exercises (these add more time to your course load, but are valuable).

Live sessions

1 hour per week | Online (Zoom)

These live sessions with the instructors and your cohort (capped at 10 students) are used to answer questions, discuss any challenges students encountered while completing the coursework, or dive deeper into related topics. We’ll run a number of cohorts concurrently each session so that we can cater to more time zones.

AEC → YourCo meetings

1 hour per week | Optional, but recommended

Each week, we’ll provide ways for you to apply what we’ve learned to your own data infrastructure. Scheduling a weekly meeting between yourself and a member of your data (or engineering) team will give you the opportunity to discuss what you’ve learned in class.


Why do you recommend that students have the support of their company for this course?

Most people make the transition from analyst to analytics engineer internally, so we’ve designed this course to support that transition. This differs from many boot camps, which are designed to help someone make a career (and employer) change, and which often include a job-search component.

By targeting the course towards students who are looking to make this transition internally, we can deliver a more effective course:

  • Each week, we include ways for students to relate what we’ve learned in the classroom to their own environment, further cementing the material.
  • Some companies may choose to give their employees time to complete coursework during the week, meaning students won’t feel overloaded when completing the course.

Further, the feedback we’re hearing from industry is that hiring managers want this course to let them cast a wider net when hiring — if they can outsource training to us, they can hire people who may not have previous experience with a modern data stack and be confident they’ll get up to speed quickly.

Why is most of the course content asynchronous?

We’re using a “flipped classroom” model to teach — students work through lesson content and exercises asynchronously, and use synchronous time to discuss any challenges.

Flipped classrooms support students that learn at different speeds. Some students may re-watch parts of a video that didn’t make sense to them, while others can speed through at 1.5x.

Independent work is a valuable professional skill. Most things you do at work won’t happen in a pairing setting, so we’re closer to real life!

Students can do their coursework at a time that suits them. Whether you set aside Fridays as a learning day, only have time on Tuesday nights, or will be working from a timezone that only just overlaps with North America, you can still be part of the course.

But we’re also trying to keep the best part of a classroom setting: collaboration. Students are encouraged to ask questions in our Slack and to pair with each other to work through exercises.

Contact Us

© 2021 Analytics Engineers Club. All rights reserved.

Join the club

Join our mailing list to be the first to know when registrations open, or to keep an eye on our blog.

analytics engineers club