Why Blue Coding? At Blue Coding, we specialize in hiring excellent developers and amazing people from all over Latin America and other parts of the world. For the past 11 years, we’ve helped cutting-edge companies in the United States and Canada build great development teams and develop great products. Large multinationals, digital agencies, SaaS providers, and software consulting firms are just a few of our clients. Our team of over 150 engineers, project managers, QA specialists, UX/UI designers, and many more is distributed across more than 10 countries in the Americas. We are a fully remote company working with a wide array of technologies, and we have expertise in every stage of the software development process.
Our team is highly connected, united, and culturally diverse, and our collaborators are involved in many initiatives around the world, from wildlife preservation to volunteering at local charities. We stand for honesty, fairness, respect, efficiency, hard work, and cooperation.
This position is open exclusively to candidates based in LATAM countries.
What are we looking for? In this opportunity, we are looking for a Senior DataOps Engineer with a strong ownership mindset to work with one of our foreign clients. This role is ideal for someone who thrives in complex data environments and enjoys building, optimizing, and scaling data infrastructure on AWS while driving best practices across DataOps and DevOps. You will play a key role in designing and maintaining cloud infrastructure that supports data pipelines, analytics workflows, and machine learning initiatives. Working closely with Data Engineering and DevOps teams, you’ll help improve reliability, scalability, and governance across the data platform. If you are fully fluent in English, proactive, a strong communicator, a natural problem-solver, and detail-oriented, this role might be a great fit for you! Our jobs are fully remote, and you will be integrated directly into the client’s team, gaining valuable experience and forming meaningful connections.
What's unique about this job? This role offers the opportunity to build and shape a modern DataOps ecosystem from the ground up, working with a highly scalable AWS stack and real-time data processing.
You’ll operate with a high level of autonomy, directly influencing architectural decisions and best practices in a fast-moving, low-structure environment.
If you enjoy owning systems end-to-end, solving complex data challenges, and working on infrastructure that directly impacts business-critical workflows, this role will give you that exposure.
Here are some of the exciting day-to-day challenges you will face in this role:
Design, develop, and maintain data pipelines and datastores that support enterprise analytics, data science, and operational workloads.
Lead and support large-scale database migration initiatives, including on-premises-to-cloud migrations.
Monitor, analyze, and optimize the performance and stability of data layer services and platforms.
Ensure data integrity, quality, and compliance across pipelines and datasets.
Collaborate closely with peers across engineering, analytics, and technology teams.
Guide, coach, and mentor data engineers, BI developers, and analysts.
Design and implement enterprise-scale data solutions with long-term business impact.
Build and maintain data processing solutions using Python and/or Scala.
Work with a variety of data ingestion patterns, including SFTP, APIs, streaming, and batch processing.
Design and support database models optimized for analytical and reporting use cases.
Implement monitoring, alerting, and observability for data pipelines and infrastructure.
Maintain clear and comprehensive documentation of data architectures, pipelines, and processes.
Work within an Agile environment, collaborating through tools such as Jira and Git.
You will shine if you have:
5+ years of experience in DevOps or DataOps roles with a strong focus on AWS.
Hands-on experience with AWS services such as EMR (Spark), Redshift, RDS, Glue, Lambda, Kinesis, Step Functions, EventBridge, SNS/SQS, KMS, and CloudWatch.
Strong proficiency in Python and SQL.
Experience working with relational, NoSQL, and columnar databases.
Experience implementing Infrastructure as Code using Terraform or CloudFormation.
Experience designing and maintaining CI/CD pipelines.
Familiarity with data quality frameworks, observability tools, and data governance practices.
Knowledge of handling sensitive data (PII) and compliance standards such as HIPAA.
Bachelor’s degree in Computer Science, Data Science, or equivalent experience.
Proven ability to work autonomously and take ownership of infrastructure and data workflows in a production environment.
It doesn’t hurt if you also have:
AWS certifications (e.g., Data Engineer – Associate, DevOps Engineer – Professional).
Experience with AWS DMS, Secrets Manager, SES, and containerization tools such as Docker.
Experience with BI tools such as Tableau or Power BI.
Experience with data observability platforms.
Advanced degree in a related field.
Here are some of the perks we offer you:
Salary in USD
Flexible schedule (within US time zones)
100% remote
Work with a modern AWS stack (EMR, Kinesis, Glue, etc.)
High-impact role with ownership over key technical decisions
Ready to learn more? Apply below!