Join the Future of Blockchain and Crypto
Frontier Research is a technology firm focused on developing a comprehensive range of interchain products and services that address the unmet needs and challenges faced by stakeholders in the MEV supply chain.
We're looking for a talented Data Engineer to join our team and play a pivotal role in the ideation, design, and development of trailblazing MEV-focused products. Bring your technical prowess to work hands-on with our products from the ground up, transforming ambitious concepts into practical, high-impact solutions.
We would ideally like someone based in London who can work a few days a week from our centrally located office, and we may be able to provide relocation support if needed. That said, we are open to bringing on another remote teammate for the right candidate.
What you will do:
- Design, build, and maintain highly scalable, optimized databases, both relational (e.g., PostgreSQL, MySQL) and columnar, to support our data processing needs.
- Develop and maintain our data lakehouse, ensuring seamless integration of structured and semi-structured data from various sources, while implementing data governance and quality control measures.
- Develop data pipelines and ETL processes to ensure data quality, integrity, accessibility, and availability across our systems.
- Implement and optimize data processing frameworks such as Apache Beam, Google Cloud Dataflow, or Apache Spark to process large-scale datasets efficiently.
- Implement and maintain core data presentations (visualizations and dashboards).
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and implement solutions accordingly.
- Stay current with industry trends and best practices in data engineering, and recommend improvements to our data infrastructure.
- Assist with data migration and integration projects as needed.
- Document processes and maintain data engineering guidelines, ensuring adherence to data governance policies.
What you will need:
- Strong knowledge of relational databases (e.g., PostgreSQL, MySQL) and columnar data stores, with experience in database design, optimization, and maintenance.
- Experience with common data pipeline and ETL tooling, including Python data science libraries (e.g., Pandas, NumPy) and workflow management platforms like Apache Airflow.
- Experience with data processing frameworks such as Apache Beam, Google Cloud Dataflow, and/or Apache Spark.
- Previous experience in a similar role building and maintaining scalable data systems.
Nice to have:
- Experience with streaming data processing
- Experience in MEV
- A history of doing cool and difficult stuff
What we offer:
- Competitive full-time salary
- Semi-regular team onsites (1-2 a year)
- Work with some great minds in the MEV space
- Build a unique and competitive skillset
- Small high-performance team with no corporate bureaucracy
We don’t discriminate on the basis of race, nationality, gender identity, age, disability, or religion. If you think you have what it takes to be successful in this role, get in touch for a chat.