Posted: 1 day ago
On-site | Full Time
Power Cozmo is a growing B2B eCommerce platform focused on building scalable, data-driven solutions for enterprises. Our platform relies heavily on modern cloud and big data technologies to power analytics, personalization, integrations, and operational intelligence. We are building robust data foundations that scale with the business, and we’re looking for engineers who want to grow with a startup and take ownership of core systems.
We are hiring two Big Data Engineers to design, build, and maintain scalable data
platforms and pipelines on AWS. You will work on batch and near-real-time data
processing, lakehouse architectures, and high-performance analytics systems.
This role is well-suited for engineers who enjoy end-to-end ownership, hands-on
development, and working in a fast-paced startup environment.
Data Engineering & Pipelines
• Design, develop, and maintain ETL / ELT pipelines for batch and near-real-time data processing
• Build data ingestion systems using PySpark, Kafka, webhooks, APIs, and event-driven architectures
• Process structured, semi-structured, and unstructured data from multiple sources
• Ensure data quality, reliability, monitoring, and performance optimization
Big Data, Analytics & Lakehouse
• Design and manage lakehouse architectures combining data lakes and analytical engines
• Optimize data storage, partitioning, and query performance
• Enable downstream analytics, reporting, and ML use cases
AWS & Cloud Technologies
• Work extensively with AWS services, including but not limited to: S3, Glue, EMR, Athena, Redshift, Lambda, EC2, IAM, CloudWatch
• Deploy, monitor, and optimize data workloads in AWS
• Apply cloud best practices for scalability, cost, and security
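To give a flavor of the storage-layout and query-performance work above, here is a minimal sketch of Hive-style date partitioning for S3 object keys, a common convention that lets engines such as Athena or Spark prune partitions instead of scanning the whole dataset (bucket prefix, table, and file names are illustrative):

```python
from datetime import date

def partition_key(prefix: str, table: str, event_date: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 key (year=/month=/day=) so query
    engines can prune partitions by date instead of scanning every object."""
    return (
        f"{prefix}/{table}/"
        f"year={event_date.year:04d}/month={event_date.month:02d}/day={event_date.day:02d}/"
        f"{filename}"
    )

# One day of order events lands under a single partition path:
key = partition_key("lake/raw", "orders", date(2024, 5, 17), "part-0000.parquet")
# -> "lake/raw/orders/year=2024/month=05/day=17/part-0000.parquet"
```

Queries filtered on `year`, `month`, and `day` then touch only the matching prefixes, which is the main lever behind the partitioning and cost-optimization responsibilities listed above.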
Databases & Data Platforms
• Design, create, and administer SQL and NoSQL databases
• Perform schema design, indexing, performance tuning, and access control
• Work with high-performance analytical databases such as ClickHouse
• Implement and maintain graph databases (GraphDB, Neo4j, or similar) for relationship-based use cases
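As an illustration of the schema-design and indexing work above, a minimal sketch using SQLite as a stand-in for a production SQL engine (table and index names are illustrative):

```python
import sqlite3

# In-memory database as a stand-in for a production SQL engine
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id         INTEGER PRIMARY KEY,
        customer   TEXT NOT NULL,
        total      REAL NOT NULL,
        created_at TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO orders (customer, total, created_at) VALUES (?, ?, ?)",
    [("acme", 120.0, "2024-05-17"),
     ("acme", 80.0, "2024-05-18"),
     ("globex", 45.0, "2024-05-18")],
)

# Index the common lookup column so customer queries use an index scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# EXPLAIN QUERY PLAN confirms the index is used rather than a full table scan
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer = ?",
    ("acme",),
).fetchall()
assert any("idx_orders_customer" in row[-1] for row in plan)
```

The same idea, checking the query plan before and after adding an index on a hot filter column, carries over to performance tuning on larger engines.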
Collaboration & Ownership
• Collaborate with backend, frontend, product, and analytics teams
• Participate in architecture discussions and technical decision-making
• Take ownership of data systems in a startup environment
Education
• BTech (Computer Science / IT) or MCA
Experience
• Minimum 3 years of corporate project experience as a Big Data Engineer / Data Engineer
• Hands-on experience delivering production-grade data systems
• Strong hands-on experience with PySpark
• Experience with ETL / ELT concepts and data modeling
• Experience building pipelines using Kafka, streaming systems, or webhooks
• Solid experience working with AWS cloud services
• Strong SQL skills and understanding of distributed systems
• Experience with Graph Databases (GraphDB, Neo4j, or similar)
• Hands-on experience with ClickHouse for large-scale analytical workloads
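To give a flavor of the ETL / ELT concepts called out above, a minimal batch-style sketch in plain Python (PySpark is omitted so the example stays self-contained; record shapes and names are illustrative):

```python
# Minimal extract -> transform -> load flow over in-memory records.
raw_events = [
    {"user": "a", "amount": "10.5", "type": "purchase"},
    {"user": "b", "amount": "oops", "type": "purchase"},   # malformed record
    {"user": "a", "amount": "4.0",  "type": "refund"},
]

def transform(events):
    """Parse amounts, drop malformed rows, and normalize refunds to negatives."""
    clean = []
    for e in events:
        try:
            amount = float(e["amount"])
        except ValueError:
            continue  # a real pipeline would route this to a dead-letter store
        if e["type"] == "refund":
            amount = -amount
        clean.append({"user": e["user"], "amount": amount})
    return clean

def load(rows):
    """Aggregate per user, standing in for a write to a warehouse table."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

totals = load(transform(raw_events))
# -> {"a": 6.5}  (user b's malformed record was dropped)
```

In a PySpark pipeline the same extract/validate/aggregate stages would typically be DataFrame transformations over partitioned input rather than Python loops.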