Analytics Engineer

Prolific

Full Time · Remote — Europe · Posted 20 days ago
With Prolific, we're changing how research on the internet is done. Katia and Phelim started by building a marketplace that connects researchers (from both academia and industry) with instant access to high-quality, global research participants. Now, as a growing team, the bigger vision is to build the most powerful and trusted platform for behavioural research.
We were part of Y Combinator's Summer 2019 batch, we recently closed a $1.4M seed round, and we've been growing 2.5x a year purely through word of mouth. We're already profitable, and we have very ambitious plans. Come join our exciting company!

The Role

As a complex marketplace platform with a strong focus on data, we know we are sitting on a trove of information that can take our product, and world-class research, to the next level. We are looking for an Analytics Engineer to bring order to our data warehouse: providing an architecture, designing the tables, and writing the SQL that ensures analysts across the company have access to high-quality, trustworthy, and easily interpretable data.
The ideal candidate will be comfortable working with large data sets, and able to combine software engineering approaches with analytics know-how to create robust, scalable data pipelines. You should have strong experience with SQL and be comfortable with Python. You will own the entirety of our data tech stack, from product-driven event analytics, through to data storage, modelling and BI tools.
You will work with our small but excellent team of data analysts to transform our raw business data into documented datasets ready to provide crucial insight! We hope you'll also be a mentor to your team members, raising the standard of data and software engineering across the team.

You will work towards

    • Own and manage our Redshift data warehouse, Stitch ETL, and our SQL scheduling tools, ensuring that our business insight is drawn from reliable, correct, and easily interpretable datasets.
    • Work with our product and engineering teams to implement new event tracking, understand user attribution, and support the development of data-intensive projects.
    • Work with our engineering team and stakeholders across the business to deliver data and insights, in an automated way, wherever they are most needed: e.g. into financial models, dashboards, our CRM, or Zendesk.
    • Catalogue and define metrics from across the business to ensure alignment, clarity, and regular automated reporting.

This will likely involve some or all of the following:
    • Working with the growth and engineering teams to ensure a reliable, scalable, robust architecture for our data platform. Implementing new event-tracking analytics and ensuring application designs meet reporting and analytics requirements.
    • Leading and planning implementation work in-and-around the data stack.
    • Implementing tests and assertions to ensure the validity and correctness of data in the warehouse (see the sketch after this list).
    • Supporting data analysts by deploying virtual machines, implementing automation and bespoke tools, and helping with data cleaning. Coaching analysts on software engineering best practices (e.g. building test suites and CI pipelines).
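
As a flavour of the tests-and-assertions work above, here is a minimal sketch of a data-quality check against a warehouse table. The participants table, its columns, and the connection details are all hypothetical placeholders; Redshift speaks the Postgres protocol, so psycopg2 works as a client. In practice, checks like these would more likely live in our SQL scheduling tools than in an ad-hoc script.

    # A minimal sketch of data-quality assertions against the warehouse.
    # The `participants` table and its columns are hypothetical, and the
    # connection details are placeholders.
    import psycopg2

    ASSERTIONS = {
        # Each assertion is a query that should return zero rows.
        "participant_ids_unique": """
            SELECT id FROM participants
            GROUP BY id HAVING COUNT(*) > 1
        """,
        "signup_dates_not_null": """
            SELECT id FROM participants
            WHERE signed_up_at IS NULL
        """,
    }

    def run_assertions(conn) -> list:
        """Return the names of any failing assertions."""
        failures = []
        with conn.cursor() as cur:
            for name, query in ASSERTIONS.items():
                cur.execute(query)
                if cur.fetchone() is not None:  # offending rows exist
                    failures.append(name)
        return failures

    if __name__ == "__main__":
        # Redshift speaks the Postgres protocol, so psycopg2 can connect.
        conn = psycopg2.connect(
            host="example.redshift.amazonaws.com",  # placeholder endpoint
            port=5439, dbname="analytics", user="etl", password="...",
        )
        failing = run_assertions(conn)
        if failing:
            raise SystemExit(f"Data-quality assertions failed: {failing}")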

Requirements

    • Experience writing high-quality analytic SQL.
    • Knowledge of data pipelines best practices.
    • Experience with Redshift, ETL tooling, or other data-warehousing technologies.
    • An eye for detail, combined with the ability to get things done quickly and correctly even in unfamiliar territory.
    • Good written communication skills, as evidenced in design documents, project plans, code reviews, etc.

Nice to have:
    • Software engineering experience in Python.
    • Familiarity with Snowplow, Segment, Heap or similar event-tracking/stream processing systems.
    • Understanding of common analytical techniques and statistical practices.
    • Team leading/mentoring experience.
    • Familiarity with machine learning.

Our Data Tech Stack

We use Snowplow to collect event data, Stitchdata to ETL data from our production MongoDB and secondary data sources, AWS Redshift as a central store and source of truth for all our business data, and Dataform for data modelling, scheduling SQL, and validating the data in our warehouse. We use Redash as our primary BI tool and for ad-hoc queries, and JupyterHub as a central workspace for analysis and deeper exploration of our data.
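
As an illustration of the analysis end of this stack, here is a minimal sketch of pulling a modelled dataset out of Redshift into pandas, much as an analyst might from JupyterHub. The daily_active_researchers table and the connection string are hypothetical placeholders.

    # A minimal sketch of notebook-style exploration of a modelled
    # warehouse table. The `daily_active_researchers` table is
    # hypothetical; Redshift speaks the Postgres protocol, so a
    # standard SQLAlchemy postgresql connection string works.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine(
        "postgresql+psycopg2://analyst:password@example.redshift.amazonaws.com:5439/analytics"
    )

    df = pd.read_sql(
        """
        SELECT activity_date, active_researchers
        FROM daily_active_researchers
        WHERE activity_date >= DATEADD(day, -90, CURRENT_DATE)
        ORDER BY activity_date
        """,
        engine,
    )
    print(df.describe())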

What We Offer

    • Meaningful share option allocation and salary depending on experience.
    • Pension (employer contribution 3% of base salary).
    • Flexible working: Work equipment of your choice (paid for by us), and you can work flexibly from home or from our coworking space (also paid for by us).
    • Flexible hours: We have core hours of 10am - 3pm (with an hour for lunch), but the rest is up to you.
    • Childcare flexibility: Need to pick your child up from school? No problem.
    • 25 days' holiday per year, plus bank holidays (which you can swap for the religious holidays you observe).
    • £1000 yearly budget for education, growth, and training.
    • Personal growth opportunities and career progression (e.g., learn about the startup ecosystem, mentoring from executive team, learn about psychological science and research methods).
    • Open, transparent, and inclusive culture.
    • A company committed to carbon offsetting: we donate money in your name each month to plant trees and we offset travel as an organization each year.
    • Generous maternity, paternity, and shared parental leave.