About Terawatt Infrastructure
The once-in-a-century transition to autonomous and electric vehicles is underway and will require a multi-trillion-dollar investment in energy and charging infrastructure, and the real estate to site it on. Terawatt is the leader in delivering large-scale, turnkey charging solutions for companies rapidly deploying AV and EV fleets. Whether it’s an urban mobility hub or a carefully located multi-fleet hub for semi-trucks, Terawatt brings the talent, capabilities, and capital to create reliable, cost-effective solutions for customers on the leading edge of the transition to the next generation of transport.
With a growing portfolio of sites across the US, in urban hubs and along key logistics and transportation corridors, Terawatt is building the permanent transportation and logistics infrastructure of tomorrow through a robust combination of capital, real estate, development, and site operations solutions. The company develops, finances, owns, and operates charging solutions that take the cost and complexity out of electrifying fleets.
At Terawatt, we execute humbly and with urgency to provide tailored solutions for fleets that delight our clients and support the transition of transportation.
Role Description
We are seeking a highly skilled Senior Data Engineer to join our growing team. In this role, you will design and implement scalable and efficient data architectures to support our business needs. You will collaborate closely with data scientists, analysts, and other cross-functional teams to build and optimize data pipelines, ensuring that data is accessible, secure, and well-structured for analytics and reporting.
A key part of this role involves developing and maintaining data models, databases, and data lakes, while implementing robust data governance and quality assurance practices. You will drive the development of scalable data infrastructure aligned with company architecture standards and best practices.
This role also requires curiosity and a commitment to building and maintaining production data lake pipelines that transform raw time-series data into ML-ready features, training datasets, and batch predictions. This includes ensuring data quality, reproducibility, and reliable retraining so ML outputs—such as forecasts and risk scores—can be trusted by downstream systems.
Problems You Will Solve
- Turn messy operational data into reliable signals by building pipelines that transform noisy, incomplete, and high-volume time-series data into trusted datasets for analytics, product features, and ML workflows
- Design a resilient lakehouse platform by architecting a scalable Databricks-based platform that supports both streaming and batch workloads while ensuring governance, observability, and reliability
- Enable production-ready ML pipelines by creating reproducible workflows, reliable feature datasets, and batch prediction pipelines that downstream systems can depend on
- Enable self-service analytics and ML by building infrastructure and abstractions that allow analysts, engineers, and data scientists to independently explore and use data
- Scale a platform for product and analytics by designing systems that support operational product features, internal reporting, and ML use cases without compromising performance or data quality
Core Responsibilities
- Architect and evolve a Databricks-based data platform that serves as the scalable foundation for product features, internal reporting, and ML workflows.
- Set technical standards for modeling raw data into clean, reliable datasets, ensuring high integrity and point-in-time accuracy for both BI and ML applications.
- Build and maintain self-service tooling and infrastructure abstractions that improve the developer experience for data producers, analysts, and data scientists.
- Design and optimize high-performance ETL/ELT pipelines using Delta Live Tables and Structured Streaming to handle seamless ingestion from diverse data sources.
- Own platform observability, testing, and proactive monitoring to ensure the performance and reliability of critical data delivery and pipeline health.
- Architect and enforce data security, compliance, and access controls by implementing Unity Catalog and IAM (Identity and Access Management) best practices across the enterprise.
- Build and maintain production-grade pipelines that transform raw data into ML-ready features, training datasets, and reliable batch predictions.
- Lead Infrastructure as Code (IaC) initiatives using Terraform and improve team productivity by identifying technical debt and automating complex deployment workflows.
- Partner with Engineering, Product, and Business teams to resolve ambiguities and ensure shipped data features are impactful, reliable, and aligned with business outcomes.
- Build and maintain a self-service data lake environment, empowering non-data engineers and stakeholders to discover, explore, and analyze data independently.
- Promote engineering excellence through code reviews, documentation, and technical standards for orchestration and testing.
Minimum Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 6+ years in data engineering, platform development, or large-scale data systems.
- Hands-on experience with Databricks or modern lakehouse platforms and cloud platforms (AWS, GCP, or Azure).
- Experience building scalable ETL/ELT pipelines using Spark and SQL.
- Proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
- Strong understanding of data modeling, schema design, and performance optimization.
- Experience building reliable, production-grade data pipelines with a focus on data quality and observability.
- Experience supporting analytics and/or ML workflows, including preparing ML-ready datasets.
- Working knowledge of data governance, security, and access control frameworks.
- Familiarity with Infrastructure as Code (IaC) and automated deployment workflows (e.g., Terraform).
- Proven ability to collaborate across teams and contribute to technical direction.
Preferred Qualifications
- Experience working with time-series, IoT, or high-volume telemetry data systems.
- Familiarity with EV charging ecosystems, including OCPP (Open Charge Point Protocol).
- Domain experience in electric vehicles (EV), energy systems, or distributed energy resources (DERs).
- Experience building ML feature pipelines, training datasets, or batch inference workflows.
- Experience designing self-service data platforms for analysts and data scientists.
- Background in event-driven or real-time data architectures.
- Solid software engineering experience, including writing maintainable production code, testing, and applying engineering best practices to data systems.
$110,000 - $135,000 a year
Compensation for this role is determined by several factors, including the cost of labor in specific geographic markets, and this range is intended to provide a helpful reference. The actual compensation offer will be based on the candidate’s location, skills, level of expertise and experience, and internal equity considerations. In addition to base salary, we offer a comprehensive benefits package and, where applicable, performance-based incentives.
We are building a team that represents a variety of backgrounds, perspectives, and skills. At Terawatt, we continuously strive to foster inclusion, humility, energizing relationships, and belonging, and welcome new ideas. We're growing and want you to grow with us. We encourage people from all backgrounds to apply.
If a reasonable accommodation is required to fully participate in the job application or interview process, or to perform the essential functions of the position, please contact people@terawattinfrastructure.com.
Terawatt Infrastructure is an equal-opportunity employer.