Enable Data is a data management company that specializes in helping businesses harness their big data through advanced analytics and machine learning solutions.
Hyderabad
INR 16.0 - 22.5 Lacs P.A.
Remote
Full Time
Experience Required: 4 to 5 years
Mode of Work: Remote
Skills Required: The primary skill set is Java Full Stack engineering, with Angular experience considered a plus. Any experience with healthcare and/or quality measures (e.g., HEDIS) would be ideal, though likely a stretch.
Notice Period: Immediate joiners; permanent/contract role (can join by June 15th).
The candidate should have 4 to 5 years of overall and relevant experience in Java Full Stack development.
Responsibilities:
- Develop and engineer end-to-end features of a system using Java Full Stack development.
- Front-end technologies: Angular, HTML5, CSS3, JavaScript.
- Collaborate with cross-functional teams to deliver innovative solutions that improve client services.
- Utilize development skills to solve challenging business problems with a cloud-first and agile mindset.
- Stay updated with the latest technologies and leverage them to improve client services.
- Ensure the quality and integrity of the system by conducting thorough testing and debugging.
Must-Have Skills: Java Full Stack development.
Good-to-Have Skills: Spring Boot, Angular.
- Strong understanding of a cloud-first and agile mindset.
- Experience in developing end-to-end features of a system.
- Solid grasp of testing and debugging techniques.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
Hyderabad
INR 10.0 - 20.0 Lacs P.A.
Remote
Full Time
Experience Required: 3 to 4 years
Mode of Work: Remote
Skills Required: Azure Databricks, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: Immediate joiners; permanent/contract role (can join by June 15th).
- 3 to 4+ years of experience with Big Data technologies.
- Experience with Databricks is a must, along with Python scripting and SQL knowledge.
- Strong knowledge of and experience with the Microsoft Azure cloud platform.
- Proficiency in SQL and experience with SQL-based database systems.
- Experience with batch and streaming data processing.
- Hands-on experience with Azure data services such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Experience using Azure Databricks in real-world scenarios is preferred.
- Experience with data integration and ETL (Extract, Transform, Load) processes.
- Strong analytical and problem-solving skills.
- Good understanding of data engineering principles and best practices.
- Experience with programming languages such as PySpark/Python.
- Relevant certifications in Azure data services or data engineering are a plus.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
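As an illustrative aside (not part of the original posting), the sketch below shows the kind of batch Databricks job this role describes: read raw files from Azure Data Lake Storage with PySpark, apply basic cleansing, and write a Delta table. The storage path, column names, and table name are hypothetical placeholders; an actual project would substitute its own schema and layout.

# Minimal batch ingestion sketch, assuming hypothetical storage paths and columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch_ingest").getOrCreate()

raw_path = "abfss://raw@examplestore.dfs.core.windows.net/orders/"  # placeholder container/path

# Read raw CSV files from the lake.
orders = spark.read.option("header", "true").csv(raw_path)

# Basic cleansing: de-duplicate on a key, drop null keys, and fix types.
cleaned = (
    orders
    .dropDuplicates(["order_id"])                      # placeholder key column
    .filter(F.col("order_id").isNotNull())
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date"))
)

# Delta is the default table format on Databricks; outside Databricks this needs
# the delta-spark package configured on the session.
cleaned.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")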
Hyderabad
INR 9.0 - 13.0 Lacs P.A.
Work from Office
Full Time
The candidate should have 4 to 5 years of overall and relevant experience in Java Full Stack development.
Responsibilities:
- Develop and engineer end-to-end features of a system using Java Full Stack development.
- Front-end technologies: Angular, HTML5, CSS3, JavaScript.
- Collaborate with cross-functional teams to deliver innovative solutions that improve client services.
- Utilize development skills to solve challenging business problems with a cloud-first and agile mindset.
- Stay updated with the latest technologies and leverage them to improve client services.
- Ensure the quality and integrity of the system by conducting thorough testing and debugging.
Must-Have Skills: Java Full Stack development.
Good-to-Have Skills: Spring Boot, Angular.
- Strong understanding of a cloud-first and agile mindset.
- Experience in developing end-to-end features of a system.
- Solid grasp of testing and debugging techniques.
The primary skill set required is Java Full Stack engineering, with Angular experience considered a plus. Any experience with healthcare and/or quality measures (e.g., HEDIS) would be ideal, though likely a stretch.
Hyderabad
INR 27.5 - 40.0 Lacs P.A.
Remote
Full Time
Experience Required: 8+ years
Mode of Work: Remote
Skills Required: Azure Databricks, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: Immediate joiners; permanent/contract role (can join within June).
Responsibilities:
- Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks.
- Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights.
- Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks.
- Ensure data quality, integrity, and security throughout all stages of the data lifecycle.
- Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions.
- Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features.
- Provide technical guidance and expertise to junior data engineers and developers.
- Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering.
- Contribute to the continuous improvement of data engineering processes, tools, and best practices.
Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 10+ years of experience as a Data Engineer with a focus on building cloud-based data solutions.
- Strong experience with cloud platforms such as Azure or AWS.
- Proficiency in Apache Spark and Databricks for large-scale data processing and analytics.
- Experience in designing and implementing data processing pipelines using Spark and Databricks.
- Strong knowledge of SQL and experience with relational and NoSQL databases.
- Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services.
- Good understanding of data modelling and schema design principles.
- Experience with data governance and compliance frameworks.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills to work effectively in a cross-functional team.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
Hyderabad
INR 14.0 - 18.0 Lacs P.A.
Work from Office
Full Time
Responsibilities:
- Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks.
- Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights.
- Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks.
- Ensure data quality, integrity, and security throughout all stages of the data lifecycle.
- Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions.
- Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features.
- Provide technical guidance and expertise to junior data engineers and developers.
- Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering.
- Contribute to the continuous improvement of data engineering processes, tools, and best practices.
Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 10+ years of experience as a Data Engineer, Software Engineer, or similar role, with a focus on building cloud-based data solutions.
- Strong experience with cloud platforms such as Azure or AWS.
Hyderabad
INR 25.0 - 40.0 Lacs P.A.
Remote
Full Time
Job Title: Senior Backend Developer (JavaScript & Node.js)
Location: Remote
Job Type: Full-time
Experience: Minimum 8+ years
Key Responsibilities:
- Develop, maintain, and optimize backend services using Node.js and JavaScript.
- Architect and deploy applications using AWS Lambda and the Serverless Framework.
- Ensure efficient integration of AWS services such as Cognito, DynamoDB, RDS, ECS, ECR, EC2, and IAM.
- Implement and manage containerized environments using Docker.
- Collaborate with cross-functional teams to ensure seamless application performance.
- Design and optimize database interactions, ensuring high availability and performance.
- Troubleshoot and resolve technical issues related to backend services.
- Implement security best practices for cloud-based applications.
Required Skills & Experience:
- Strong expertise in Node.js and JavaScript.
- Deep understanding of AWS Lambda and the Serverless Framework.
- Hands-on experience with Docker and container orchestration tools.
- Proven ability to work with AWS services (Cognito, DynamoDB, RDS, ECS, ECR, EC2, IAM).
- Strong knowledge of RESTful APIs and microservices architecture.
- Hands-on experience writing SQL.
- Experience with CI/CD pipelines for efficient deployment.
- Ability to optimize backend performance and scalability.
- Solid understanding of security and compliance in cloud environments.
Preferred Qualifications:
- Experience with monitoring and logging tools (AWS CloudWatch, AWS X-Ray).
- Familiarity with Terraform or Infrastructure-as-Code (IaC) concepts.
- Previous experience with high-traffic applications and scalable systems.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
Hyderabad
INR 8.0 - 13.0 Lacs P.A.
Work from Office
Full Time
- Develop, maintain, and optimize backend services using Node.js and JavaScript.
- Architect and deploy applications using AWS Lambda and the Serverless Framework.
- Ensure efficient integration of AWS services such as Cognito, DynamoDB, RDS, ECS, ECR, EC2, and IAM.
- Implement and manage containerized environments using Docker.
- Collaborate with cross-functional teams to ensure seamless application performance.
- Design and optimize database interactions, ensuring high availability and performance.
- Troubleshoot and resolve technical issues related to backend services.
- Implement security best practices for cloud-based applications.
- Strong expertise in Node.js and JavaScript.
- Deep understanding of AWS Lambda and the Serverless Framework.
- Hands-on experience with Docker and container orchestration tools.
Hyderabad
INR 25.0 - 40.0 Lacs P.A.
Remote
Full Time
Job Title: Senior Backend Developer (JavaScript & Node.js)
Location: Remote
Job Type: Full-time
Experience: Minimum 8+ years
Key Responsibilities:
- Develop, maintain, and optimize backend services using Node.js and JavaScript.
- Architect and deploy applications using AWS Lambda and the Serverless Framework.
- Ensure efficient integration of AWS services such as Cognito, DynamoDB, RDS, ECS, ECR, EC2, and IAM.
- Implement and manage containerized environments using Docker.
- Collaborate with cross-functional teams to ensure seamless application performance.
- Design and optimize database interactions, ensuring high availability and performance.
- Troubleshoot and resolve technical issues related to backend services.
- Implement security best practices for cloud-based applications.
Required Skills & Experience:
- Strong expertise in Node.js and JavaScript.
- Deep understanding of AWS Lambda and the Serverless Framework.
- Hands-on experience with Docker and container orchestration tools.
- Proven ability to work with AWS services (Cognito, DynamoDB, RDS, ECS, ECR, EC2, IAM).
- Strong knowledge of RESTful APIs and microservices architecture.
- Hands-on experience writing SQL.
- Experience with CI/CD pipelines for efficient deployment.
- Ability to optimize backend performance and scalability.
- Solid understanding of security and compliance in cloud environments.
Preferred Qualifications:
- Experience with monitoring and logging tools (AWS CloudWatch, AWS X-Ray).
- Familiarity with Terraform or Infrastructure-as-Code (IaC) concepts.
- Previous experience with high-traffic applications and scalable systems.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
Hyderabad
INR 25.0 - 40.0 Lacs P.A.
Remote
Full Time
Job Title: Senior Backend Developer (JavaScript & Node.js)
Location: Remote
Job Type: Full-time
Role: Individual Contributor
Experience: Minimum 8+ years
Key Responsibilities:
- Develop, maintain, and optimize backend services using Node.js and JavaScript.
- Architect and deploy applications using AWS Lambda and the Serverless Framework.
- Ensure efficient integration of AWS services such as Cognito, DynamoDB, RDS, ECS, ECR, EC2, and IAM.
- Implement and manage containerized environments using Docker.
- Collaborate with cross-functional teams to ensure seamless application performance.
- Design and optimize database interactions, ensuring high availability and performance.
- Troubleshoot and resolve technical issues related to backend services.
- Implement security best practices for cloud-based applications.
Required Skills & Experience:
- Strong expertise in Node.js and JavaScript.
- Deep understanding of AWS Lambda and the Serverless Framework.
- Hands-on experience with Docker and container orchestration tools.
- Proven ability to work with AWS services (Cognito, DynamoDB, RDS, ECS, ECR, EC2, IAM).
- Strong knowledge of RESTful APIs and microservices architecture.
- Hands-on experience writing SQL.
- Experience with CI/CD pipelines for efficient deployment.
- Ability to optimize backend performance and scalability.
- Solid understanding of security and compliance in cloud environments.
Preferred Qualifications:
- Experience with monitoring and logging tools (AWS CloudWatch, AWS X-Ray).
- Familiarity with Terraform or Infrastructure-as-Code (IaC) concepts.
- Previous experience with high-traffic applications and scalable systems.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
Hyderabad
INR 16.0 - 25.0 Lacs P.A.
Remote
Full Time
Experience Required: 4 to 6 years (mandatory)
Mode of Work: Remote
Skills Required: Azure Data Factory, SQL, Databricks, Python/Scala
Notice Period: Immediate joiners; permanent role (can join by July 4th, 2025).
- 4 to 6 years of experience with Big Data technologies.
- Experience with the Microsoft Azure cloud platform.
- Experience in SQL and with SQL-based database systems.
- Hands-on experience with Azure data services such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Experience with data integration and ETL (Extract, Transform, Load) processes.
- Experience with programming languages such as Python.
- Relevant certifications in Azure data services or data engineering are a plus.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
Hyderabad
INR 4.68 - 8.0 Lacs P.A.
Remote
Part Time
Experience Required: 4 to 6 years (mandatory)
Mode of Work: Remote
Skills Required: Azure Data Factory, SQL, Databricks, Python/Scala
Notice Period: Immediate joiners; permanent role (can join by July 4th, 2025).
Responsibilities:
- Design, develop, and implement scalable and reliable data solutions on the Azure platform.
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Design and implement data ingestion pipelines to collect data from various sources, ensuring data integrity and reliability.
- Perform data integration and transformation activities, ensuring data quality and consistency.
- Implement data storage and retrieval mechanisms, utilizing Azure services such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Monitor data pipelines and troubleshoot issues to ensure smooth data flow and availability.
- Implement data quality measures and data governance practices to ensure data accuracy, consistency, and privacy.
- Collaborate with data scientists and analysts to support their data needs and enable data-driven insights.
Requirements:
- Bachelor's degree in computer science, engineering, or a related field.
- 4+ years of experience with Big Data technologies such as Azure.
- Strong knowledge of and experience with the Azure cloud platform, Azure Data Factory, SQL, Databricks, and Python/Scala.
- Experience in SQL and with SQL-based database systems.
- Hands-on experience with Azure data services such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Experience with data integration and ETL (Extract, Transform, Load) processes.
- Strong analytical and problem-solving skills.
- Good understanding of data engineering principles and best practices.
- Experience with programming languages such as Python or Scala.
- Relevant certifications in Azure data services or data engineering are a plus.
Hyderabad
INR 27.5 - 40.0 Lacs P.A.
Remote
Full Time
Experience Required: 8+ years
Mode of Work: Remote
Skills Required: Azure Databricks, Event Hubs, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: Immediate joiners; permanent/contract role (can join by July 4th, 2025).
Responsibilities:
- Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks.
- Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights.
- Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks.
- Ensure data quality, integrity, and security throughout all stages of the data lifecycle.
- Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions.
- Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features.
- Provide technical guidance and expertise to junior data engineers and developers.
- Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering.
- Contribute to the continuous improvement of data engineering processes, tools, and best practices.
Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 10+ years of experience as a Data Engineer with a focus on building cloud-based data solutions.
- Mandatory skills: Azure Databricks, Event Hubs, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark.
- Strong experience with cloud platforms such as Azure or AWS.
- Proficiency in Apache Spark and Databricks for large-scale data processing and analytics.
- Experience in designing and implementing data processing pipelines using Spark and Databricks.
- Strong knowledge of SQL and experience with relational and NoSQL databases.
- Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services.
- Good understanding of data modelling and schema design principles.
- Experience with data governance and compliance frameworks.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills to work effectively in a cross-functional team.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
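As an illustrative aside (not part of the original posting), the streaming side of this role might look like the minimal PySpark Structured Streaming sketch below: consume events from a Kafka topic (Azure Event Hubs also exposes a Kafka-compatible endpoint) and append them to a Delta table. The broker address, topic, checkpoint path, and table name are hypothetical placeholders, and Event Hubs SASL settings are omitted.

# Minimal streaming ingestion sketch, assuming hypothetical broker/topic/table names.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# The Kafka source requires the spark-sql-kafka connector (bundled on Databricks);
# an Event Hubs Kafka endpoint would additionally need SASL/SSL options.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")  # placeholder broker
    .option("subscribe", "clickstream")                            # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before downstream parsing.
parsed = events.select(
    F.col("key").cast("string").alias("event_key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Append the raw events to a Delta table with a checkpoint for exactly-once tracking.
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/clickstream")  # placeholder path
    .outputMode("append")
    .toTable("analytics.clickstream_raw")                         # placeholder table
)
query.awaitTermination()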
Hyderabad
INR 15.0 - 30.0 Lacs P.A.
Remote
Full Time
Experience Required: 5 to 7 years (mandatory)
Mode of Work: Remote
Primary Skills: Azure Data Factory, SQL, Python/Scala
Notice Period: Immediate joiners; permanent role (can join by July 4th, 2025).
- 5 to 7 years of experience with Big Data technologies.
- Experience with the Microsoft Azure cloud platform.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure Data Factory.
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Monitor and resolve data pipeline problems to guarantee data consistency and availability.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.
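As an illustrative aside (not part of the original posting), Azure Data Factory pipelines of the kind this role maintains are often triggered and monitored programmatically. Below is a small sketch using the azure-identity and azure-mgmt-datafactory Python packages; the subscription, resource group, factory, pipeline, and parameter names are invented placeholders, not details from the listing.

# Hedged sketch: trigger and check an existing ADF pipeline run from Python.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Authenticate with whatever identity is available (CLI login, managed identity, etc.).
credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")  # placeholder subscription

# Kick off a run of an existing pipeline; all names and parameters are placeholders.
run = adf_client.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-example",
    pipeline_name="pl_daily_orders",
    parameters={"run_date": "2025-07-04"},
)

# Check the run status to confirm the pipeline completed (real monitoring would poll or alert).
pipeline_run = adf_client.pipeline_runs.get("rg-data-platform", "adf-example", run.run_id)
print(pipeline_run.status)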
Hyderabad
INR 27.5 - 40.0 Lacs P.A.
Remote
Full Time
Experience Required: 8+ years
Mode of Work: Remote
Skills Required: Azure Databricks, Event Hubs, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark
Notice Period: Immediate joiners; permanent/contract role (can join by 14th July 2025).
Responsibilities:
- Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks.
- Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights.
- Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks.
- Ensure data quality, integrity, and security throughout all stages of the data lifecycle.
- Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions.
- Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features.
- Provide technical guidance and expertise to junior data engineers and developers.
- Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering.
- Contribute to the continuous improvement of data engineering processes, tools, and best practices.
Requirements:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 10+ years of experience as a Data Engineer with a focus on building cloud-based data solutions.
- Mandatory skills: Azure Databricks, Event Hubs, Kafka, architecture, Azure Data Factory, PySpark, Python, SQL, Spark.
- Strong experience with cloud platforms such as Azure or AWS.
- Proficiency in Apache Spark and Databricks for large-scale data processing and analytics.
- Experience in designing and implementing data processing pipelines using Spark and Databricks.
- Strong knowledge of SQL and experience with relational and NoSQL databases.
- Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services.
- Good understanding of data modelling and schema design principles.
- Experience with data governance and compliance frameworks.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills to work effectively in a cross-functional team.
Interested candidates can share their resume, or refer a friend, to Pavithra.tr@enabledata.com for a quick response.