Responsibilities:
- Build and optimize data models, data warehouses, and data lakes.
- Manage and process large datasets using distributed systems.
- Work closely with the Research & Development team, data analysts, data scientists, and business teams to understand data requirements.
- Develop scalable data integration solutions using SQL, Python, and modern data engineering tools.
- Maintain data quality, lineage, security, and governance across systems.
- Implement process automation to improve system efficiency and reliability.
- Troubleshoot performance issues and ensure system availability.
- Optimize queries and implement indexing, partitioning, and other performance-tuning measures.
- Prepare and maintain documentation.
Requirements:
- Education: Bachelor's or Master's degree
- Experience: Minimum 3 years
Required Technical Skills:
- Strong proficiency in SQL and hands-on experience with relational and NoSQL databases.
- Experience with Python or other scripting languages.
- Solid understanding of ETL / ELT frameworks, data warehousing concepts, and data modeling.
- Experience with big data technologies (e.g., Spark, Hadoop, Kafka) is an advantage.
Required Behavioral Skills:
- Strong analytical thinking and clear communication.
- Problem-solving skills.
- A learning mindset to ensure reliable, secure, and high-quality database operations.
Why Join Us?
- A stellar opportunity to work with a rising company.
- An amazing, passionate young team and a beautiful office space.
- The trust of the biggest FinTech company.
- One-of-a-kind company culture and growth opportunities to accelerate your career progression.
- Company-provided lunch.
Interested candidates are requested to apply before 12 February 2026.
Apply Now