
About this role
Full-time Senior Data Engineer in AI at Keenfolks, working remotely. Apply directly through the link below.
At a glance
- Work mode
- Remote
- Employment
- Full Time
- Location
- Remote (Teletrabajo)
- Experience
- Senior · 5–7+ years
Core stack
- Data Engineering
- Observability
- Apache Spark
- Optimization
- Architecture
- Performance
- Kubernetes
- Terraform
- Cassandra
- Snowflake
- Analytics
- BigQuery
- Redshift
- MongoDB
- Airflow
- Python
- Docker
- Pandas
- Design
- Scala
- Azure
- CI/CD
- Redis
- Kafka
- NumPy
- Agile
- SOLID
- Java
- SQL
- AWS
Quick answers
Is this Data Engineer job remote?
Yes, this position is fully remote (Teletrabajo).
What skills are required?
Data Engineering, Observability, Apache Spark, Optimization, Architecture, Performance, Kubernetes, Terraform, Cassandra, Snowflake, and more.
Keenfolks is hiring for this role. Apply via the Keenfolks career page.
Sr. Data Engineer (freelance)
👀 About the role
You’ll be responsible for designing and implementing scalable, high-performance data systems, while helping define best practices across the organization.
From building pipelines to enabling advanced analytics and ML use cases, your work will directly impact how we leverage data across global clients.
🧩 Responsibilities
Design and implement scalable data architectures (ETL/ELT)
Build and maintain data pipelines for ingestion, processing, and consumption
Develop APIs and data services to support analytics and product use cases
Work with AWS and/or Azure to deploy modern data platforms (data lakes, warehouses, streaming)
Automate data workflows with a focus on resilience, observability, and performance
Collaborate with Data Scientists, Analysts, and Product teams to bring models into production
Participate in architecture decisions and code reviews
Explore and implement emerging trends (Data Mesh, MLOps, RAG, etc.) 🚀
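To make the ETL/ELT responsibility above concrete, here is a minimal, purely illustrative sketch of the extract → transform → load pattern in plain Python. The field names (`client`, `revenue`) and the in-memory "warehouse stage" are hypothetical; in practice this role would use tools from the stack listed above (Spark, Airflow, Snowflake, etc.) rather than the standard library.

```python
# Illustrative toy ETL pipeline (stdlib only); all field names are hypothetical.
import csv
import io
import json


def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw_csv)))


def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop incomplete records."""
    out = []
    for r in rows:
        if not r.get("revenue"):
            continue  # skip rows missing the required field
        out.append({
            "client": r["client"].strip().lower(),
            "revenue": float(r["revenue"]),
        })
    return out


def load(rows: list[dict]) -> str:
    """Load: serialize to JSON lines, standing in for a warehouse stage."""
    return "\n".join(json.dumps(r) for r in rows)


raw = "client,revenue\nAcme ,100.5\nGlobex,\nInitech,42"
staged = load(transform(extract(raw)))
print(staged)
```

In a production pipeline each stage would be a separately orchestrated, observable task (e.g., an Airflow DAG node) rather than a chained function call, but the separation of concerns is the same.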
🧠 What we’re looking for
5–7+ years of experience in Data Engineering or backend/data-focused development
Strong expertise in Python (Pandas, NumPy, SQLAlchemy)
Advanced SQL skills (optimization, complex queries, performance tuning)
Experience with Apache Spark and Airflow
Solid experience with data warehouses (Snowflake, BigQuery, Redshift)
Strong knowledge of AWS and/or Azure ecosystems
Experience building production-ready pipelines with CI/CD and Docker
Experience working in agile environments
✨ Nice to have
Experience with Kafka or real-time streaming
Knowledge of Scala or Java (for Spark)
Experience with NoSQL databases (MongoDB, Redis, Cassandra)
Familiarity with Kubernetes and Terraform
Exposure to MLOps or AI systems (RAG, ML pipelines)
Experience in startups or fast-paced environments
⚡️ Key skills
Scalable architecture & performance optimization
Strong problem-solving and systems thinking
Ownership and autonomy
Clear communication and ability to work cross-functionally