Posted: 1 day ago
Platform: On-site
Full Time
Experience: 5 to 12 years

Responsibilities:
- Design, develop, and deploy high-quality data processing applications and pipelines.
- Analyze and optimize existing data workflows, pipelines, and data integration processes.
- Develop highly scalable, testable, and maintainable code for data transformation and storage.
- Troubleshoot and resolve data-related issues and performance bottlenecks.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.

Qualifications:
- Bachelor's degree or equivalent experience in Computer Science, Information Technology, or a related field.
- Hands-on development experience with Python and Apache Spark (see the illustrative sketch after this list).
- Strong knowledge of Big Data technologies such as Hadoop, HDFS, Hive, Sqoop, Kafka, and RabbitMQ.
- Proficiency with relational database systems such as SQL Server or Oracle.
- Familiarity with NoSQL databases like MongoDB, Cassandra, or HBase.
- Experience with cloud platforms (AWS, Azure Databricks, GCP) is a plus.
- Understanding of ETL techniques, data integration, and Agile methodologies.
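For illustration only, a minimal PySpark transformation of the kind described above might look like the sketch below; the input path, column names, and output location are hypothetical.

```python
# Illustrative PySpark ETL sketch; paths, columns, and table layout are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw order events from HDFS (hypothetical path).
orders = spark.read.json("hdfs:///data/raw/orders/")

# Transform: keep completed orders and derive a daily revenue aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count("*").alias("order_count"),
    )
)

# Load: write the aggregate as partitioned Parquet for downstream consumers.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "hdfs:///data/curated/daily_revenue/"
)

spark.stop()
```

Writing the result as partitioned Parquet keeps the output splittable and straightforward to query from Hive or Spark SQL.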
Chennai, Tamil Nadu, India
Salary: Not disclosed