About the Role
VesselBot is seeking an AI/Data Engineer to design and build intelligent, on-premise agents that support advanced data automation across the logistics and sustainability domains. These agents will power our next generation of systems—enabling automated data processing, contextual understanding, quality assessment, enrichment, and integration across complex multimodal supply chain environments.
A major advantage in this role is that VesselBot already possesses extensive logistics, operational, and sustainability datasets across modes and partners. This means you will be working with real, rich, and complex data from day one—allowing you to focus on building intelligent systems rather than collecting or cleaning foundational inputs.
This role is ideal for an engineer who wants to work at the intersection of AI, data engineering, logistics, and emissions/sustainability technologies, and who enjoys translating real operational challenges into scalable, intelligent systems deployed fully on-premise.
Key Responsibilities
You will:
- Architect and develop AI-driven agents for logistics and sustainability data processes.
- Build systems that autonomously analyze, interpret, and transform diverse operational datasets—including the extensive datasets already maintained by VesselBot.
- Ensure agents operate on-premise with high reliability, data governance, and security.
- Build and maintain backend services in Python, including asynchronous programming patterns and message-queuing systems.
- Develop and optimize APIs for real-time monitoring and operation of intelligent agents.
- Work with ML/LLM systems and integrate them into production-grade on-premise infrastructure.
- Design and implement scalable data pipelines and processing systems using our existing logistics and sustainability data assets.
- Collaborate with domain experts to encode logistics and sustainability logic into intelligent components.
- Maintain and improve CI/CD pipelines, internal tooling, and development infrastructure.
- Contribute to a reusable framework enabling rapid development and deployment of new agents across our platform.
This is a highly autonomous, builder-oriented role with significant ownership over foundational AI and data infrastructure.
Requirements
Technical Background
- Strong Python expertise with production-level experience.
- Familiarity with ML/LLM systems and experience integrating them into applications.
- Experience with asynchronous programming and message-queuing systems (e.g., Celery, RabbitMQ, Redis queues).
- Solid understanding of API design and real-time data processing.
- Experience designing or maintaining structured/semi-structured data pipelines and backend systems.
- Comfort working with on-premise or private-cloud AI deployments.
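As an illustration of the asynchronous and queue-based processing pattern this role calls for, here is a minimal sketch using only the standard library. The names (`run_pipeline`, the `carrier` quality check) are hypothetical, and a production system would use a broker such as Celery or RabbitMQ rather than an in-process `asyncio.Queue`:

```python
import asyncio


async def producer(queue: asyncio.Queue, shipments: list) -> None:
    # Enqueue each shipment record, then signal completion with a sentinel.
    for shipment in shipments:
        await queue.put(shipment)
    await queue.put(None)


async def consumer(queue: asyncio.Queue) -> list:
    # Drain the queue, tagging each record with a toy data-quality flag.
    processed = []
    while True:
        item = await queue.get()
        if item is None:
            break
        item["valid"] = item.get("carrier") is not None
        processed.append(item)
    return processed


async def run_pipeline(shipments: list) -> list:
    # Run producer and consumer concurrently; return the processed records.
    queue: asyncio.Queue = asyncio.Queue()
    results = await asyncio.gather(producer(queue, shipments), consumer(queue))
    return results[1]


if __name__ == "__main__":
    sample = [{"id": "S1", "carrier": "ACME"}, {"id": "S2", "carrier": None}]
    print(asyncio.run(run_pipeline(sample)))
```

The same producer/consumer shape carries over when the in-process queue is replaced by a message broker: tasks are enqueued by one service and consumed by workers that validate and enrich each record.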
Preferred Domain Knowledge
- Exposure to logistics, transportation, or supply chain data (shipments, carriers, schedules, routes, locations).
- Understanding of sustainability and emissions-related datasets and methodologies.
- Experience with vector databases, semantic search, embeddings, or agent-based architectures.
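The embeddings and semantic-search item above can be illustrated with a minimal cosine-similarity lookup. The toy 2-D vectors and the `top_match` helper are purely hypothetical; a real deployment would use learned embeddings and a vector database rather than a dictionary scan:

```python
import math


def cosine_similarity(a: list, b: list) -> float:
    # Cosine of the angle between two vectors; 0.0 if either is zero-length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def top_match(query: list, corpus: dict) -> str:
    # Return the corpus key whose embedding is most similar to the query.
    return max(corpus, key=lambda k: cosine_similarity(query, corpus[k]))


if __name__ == "__main__":
    # Toy 2-D "embeddings" standing in for learned vectors.
    corpus = {"rail freight": [1.0, 0.0], "ocean freight": [0.0, 1.0]}
    print(top_match([0.9, 0.1], corpus))
```

Vector databases apply this same similarity ranking at scale, with approximate-nearest-neighbor indexes in place of the exhaustive `max` over all keys.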
Working Style
- Able to design and deliver systems end-to-end with minimal oversight.
- Comfortable shaping new capabilities in evolving technical domains.
- Strong communication, documentation, and collaboration skills.
What We Offer
- The opportunity to lead the development of core on-premise AI capabilities for a logistics-technology platform.
- Immediate access to extensive datasets that allow rapid prototyping, experimentation, and deployment of intelligent agents.
- A role blending autonomy, innovation, and real-world operational impact.
- Competitive compensation and strong long-term growth opportunities.
- Participation in a company-wide bonus scheme tied to performance.