Description
Design, develop, and maintain scalable and robust data solutions in the cloud using Apache Spark and Databricks.
- Gather and analyse data requirements from business stakeholders and identify opportunities for data-driven insights.
- Build and optimize data pipelines for data ingestion, processing, and integration using Spark and Databricks.
- Ensure data quality, integrity, and security throughout all stages of the data lifecycle.
- Collaborate with cross-functional teams to design and implement data models, schemas, and storage solutions.
- Optimize data processing and analytics performance by tuning Spark jobs and leveraging Databricks features.
- Provide technical guidance and expertise to junior data engineers and developers.
- Stay up to date with emerging trends and technologies in cloud computing, big data, and data engineering.
- Contribute to the continuous improvement of data engineering processes, tools, and best practices.

Requirements
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 10+ years of experience as a Data Engineer, Software Engineer, or similar role, with a focus on building cloud-based data solutions.
- Strong experience with cloud platforms such as Azure or AWS.
- Proficiency in Apache Spark and Databricks for large-scale data processing and analytics.
- Experience designing and implementing data processing pipelines using Spark and Databricks.
- Strong knowledge of SQL and experience with relational and NoSQL databases.
- Experience with data integration and ETL processes using tools like Apache Airflow or cloud-native orchestration services.
- Good understanding of data modelling and schema design principles.
- Experience with data governance and compliance frameworks.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills to work effectively in a cross-functional team.
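To make the pipeline responsibilities above concrete, here is a minimal ingest-transform-load sketch in plain Python. It is illustrative only: the role would use PySpark on Databricks rather than the standard library, and all field names (`region`, `amount`) and the CSV sample are hypothetical.

```python
import csv
import io

def ingest(raw_csv: str) -> list[dict]:
    """Ingestion step: parse raw CSV text into records."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(records: list[dict]) -> list[dict]:
    """Transformation step: cast types and drop rows that fail a quality check."""
    out = []
    for r in records:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # data-quality gate: skip malformed rows rather than fail the job
        out.append({"region": r["region"], "amount": amount})
    return out

def load(records: list[dict]) -> dict[str, float]:
    """Load step: aggregate totals per region into the serving shape."""
    totals: dict[str, float] = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

raw = "region,amount\nEMEA,100\nAPAC,250\nEMEA,oops\nAPAC,50\n"
print(load(transform(ingest(raw))))  # {'EMEA': 100.0, 'APAC': 300.0}
```

The same three-stage shape (ingest, quality-gated transform, aggregate/load) maps directly onto a Spark DataFrame pipeline; only the execution engine changes.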
Description
- Develop, maintain, and optimize backend services using Node.js and JavaScript.
- Architect and deploy applications using AWS Lambda and the Serverless Framework.
- Ensure efficient integration of AWS services such as Cognito, DynamoDB, RDS, ECS, ECR, EC2, and IAM.
- Implement and manage containerized environments using Docker.
- Collaborate with cross-functional teams to ensure seamless application performance.
- Design and optimize database interactions, ensuring high availability and performance.
- Troubleshoot and resolve technical issues related to backend services.
- Implement security best practices for cloud-based applications.

Requirements
- Strong expertise in Node.js and JavaScript.
- Deep understanding of AWS Lambda and the Serverless Framework.
- Hands-on experience with Docker and container orchestration tools.
- Proven ability to work with AWS services (Cognito, DynamoDB, RDS, ECS, ECR, EC2, IAM).
- Strong knowledge of RESTful APIs and microservices architecture.
- Hands-on experience writing SQL.
- Experience with CI/CD pipelines for efficient deployment.
- Ability to optimize backend performance and scalability.
- Solid understanding of security and compliance in cloud environments.
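The Lambda-backed service pattern this role describes boils down to a small event handler. The sketch below is a hypothetical handler shown in Python (AWS Lambda supports Python natively; the role itself uses Node.js) with an API Gateway proxy-style event and response shape; the `userId` field and the canned result are made up, and the real service would call DynamoDB via an SDK client where the comment indicates.

```python
import json

def handler(event: dict, context=None) -> dict:
    """Minimal Lambda-style handler: validate the request body, return an
    API Gateway proxy-integration-shaped response (statusCode + JSON body)."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    user_id = body.get("userId")
    if not user_id:
        return {"statusCode": 400, "body": json.dumps({"error": "userId is required"})}

    # In the real service this is where a DynamoDB lookup would happen;
    # a canned item keeps the sketch self-contained and runnable.
    return {"statusCode": 200, "body": json.dumps({"userId": user_id, "status": "active"})}

print(handler({"body": json.dumps({"userId": "u-123"})})["statusCode"])  # 200
```

The equivalent Node.js handler has the same contract: an `event` object in, a `{ statusCode, body }` object out.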
Description
Experience required: 4 to 6 years
Mode of work: Remote (mandatory)
Skills required: Azure Data Factory, SQL, Databricks, Python/Scala
Notice period: Immediate joiners / permanent (can join by July 4th, 2025)

- Design, develop, and implement scalable and reliable data solutions on the Azure platform.
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Design and implement data ingestion pipelines to collect data from various sources, ensuring data integrity and reliability.
- Perform data integration and transformation activities, ensuring data quality and consistency.
- Implement data storage and retrieval mechanisms, utilizing Azure services such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Monitor data pipelines and troubleshoot issues to ensure smooth data flow and availability.
- Implement data quality measures and data governance practices to ensure data accuracy, consistency, and privacy.
- Collaborate with data scientists and analysts to support their data needs and enable data-driven insights.

Requirements
- Bachelor's degree in computer science, engineering, or a related field.
- 4+ years of experience with Big Data technologies on Azure.
- Strong knowledge and experience with the Azure cloud platform, Azure Data Factory, SQL, Databricks, and Python/Scala.
- Proficiency in SQL and experience with SQL-based database systems.
- Hands-on experience with Azure data services, such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Experience with data integration and ETL (Extract, Transform, Load) processes.
- Strong analytical and problem-solving skills.
- Good understanding of data engineering principles and best practices.
- Experience with programming languages such as Python or Scala.
- Relevant certifications in Azure data services or data engineering are a plus.
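The ETL (Extract, Transform, Load) requirement above can be sketched as a toy round-trip, with SQLite standing in for an Azure SQL Database target. Everything here is illustrative: the table names (`raw_events`, `events`), the columns, and the quality rule (discard rows whose amount fails a numeric cast) are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (id INTEGER, amount TEXT)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [(1, "10.5"), (2, "not-a-number"), (3, "4.5")])

# Extract: pull raw rows from the source table
rows = conn.execute("SELECT id, amount FROM raw_events").fetchall()

# Transform: cast amounts, discarding rows that fail the cast (a simple quality rule)
clean = []
for rid, amount in rows:
    try:
        clean.append((rid, float(amount)))
    except ValueError:
        pass

# Load: write the cleaned rows into a typed target table
conn.execute("CREATE TABLE events (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)", clean)
total = conn.execute("SELECT ROUND(SUM(amount), 2) FROM events").fetchone()[0]
print(total)  # 15.0
```

In an Azure Data Factory pipeline the same extract/transform/load stages become activities orchestrated across services, with the transform typically running in Databricks.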
Description
Enable Data Incorporated is currently seeking a skilled and experienced Azure Data Engineer to join our dynamic team. As a leading provider of advanced application, data, and cloud engineering services, Enable Data has developed deep expertise across various industries. We work closely with our customers to leverage modern solutions and technologies to drive increased value across their business ecosystem.

As an Azure Data Engineer, you will play a crucial role in designing, building, and maintaining scalable and reliable data solutions on the Microsoft Azure platform. You will work closely with cross-functional teams to understand data requirements, design and implement data ingestion pipelines, perform data integration and transformation, and implement data storage and retrieval mechanisms. You will also be responsible for monitoring data pipelines and implementing data quality and governance measures.

This is an exciting opportunity for a talented Data Engineer who is passionate about working with Azure technologies and wants to contribute to the success of our clients. If you have strong technical skills, a deep understanding of data engineering principles, and hands-on experience with Microsoft Azure, we would love to hear from you.

Responsibilities
- Design, develop, and implement scalable and reliable data solutions on the Microsoft Azure platform.
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Design and implement data ingestion pipelines to collect data from various sources, ensuring data integrity and reliability.
- Perform data integration and transformation activities, ensuring data quality and consistency.
- Implement data storage and retrieval mechanisms, utilizing Azure services such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Monitor data pipelines and troubleshoot issues to ensure smooth data flow and availability.
- Implement data quality measures and data governance practices to ensure data accuracy, consistency, and privacy.
- Collaborate with data scientists and analysts to support their data needs and enable data-driven insights.

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience with Big Data technologies.
- Strong knowledge and experience with the Microsoft Azure cloud platform.
- Proficiency in SQL and experience with SQL-based database systems.
- Experience with both batch and streaming data processing.
- Proficiency in data processing frameworks such as Apache Spark, Apache Hadoop, or cloud-native data processing services (Azure Data Lake, Azure Data Factory, Azure Databricks, Azure Synapse, Snowflake, Cosmos DB).
- Hands-on experience with Azure data services, such as Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Experience using Azure Databricks in real-world scenarios.
- Experience with data integration and ETL (Extract, Transform, Load) processes.
- Strong analytical and problem-solving skills.
- Good understanding of data engineering principles and best practices.
- Experience with programming languages such as Python or Scala.
- Relevant certifications in Azure data services or data engineering are a plus.
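The batch-and-streaming requirement above often reduces to one idea: group an unbounded stream into fixed-size micro-batches, then reuse batch logic on each one. The sketch below shows that pattern in plain Python as an illustration of the model behind tools like Spark Structured Streaming; the integer stream and batch size are arbitrary examples.

```python
from typing import Iterable, Iterator

def micro_batches(stream: Iterable[int], batch_size: int) -> Iterator[list[int]]:
    """Group a (potentially unbounded) stream into fixed-size micro-batches."""
    batch: list[int] = []
    for record in stream:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch when the stream ends
        yield batch

# Each micro-batch can then be processed with the same logic as a batch job:
sums = [sum(b) for b in micro_batches(range(1, 8), batch_size=3)]
print(sums)  # [6, 15, 7]
```

Real streaming engines add triggers, watermarks, and checkpointing on top, but the batch/stream unification the posting asks about is exactly this reuse of one transformation across both modes.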
As a talented Web Application Developer at Enable Data Incorporated, you will play a crucial role in developing high-quality web applications that enhance user experience. You will work closely with cross-functional teams to bring new features to life while ensuring the responsiveness and performance of applications. Your responsibilities will include developing and maintaining robust web applications using modern technologies, collaborating with designers and product managers to create seamless user experiences, and writing clean, scalable code following best practices.

Your expertise in HTML, CSS, and JavaScript, along with experience in frameworks like React, Angular, or Vue.js, will be invaluable in optimizing applications for speed and scalability. You will participate in code reviews to maintain high code quality and troubleshoot applications to enhance performance. Staying updated with emerging web development trends and technologies will be essential in this role.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Software Engineering, or a related field, along with at least 5 years of experience in web application development. Proficiency in server-side languages like Node.js, Python, or PHP, familiarity with database systems such as MySQL, PostgreSQL, or MongoDB, and a strong understanding of web standards and best practices are required. Experience in agile development methodologies, excellent problem-solving skills, and attention to detail will set you up for success in this role.
Description
Experience required: 8+ years
Mode of work: Remote
Skills required: Azure Databricks, Event Hubs, Kafka, Architecture, Azure Data Factory, PySpark, Python, SQL, Spark
Notice period: Immediate joiners / permanent / contract role (can join by September 29th, 2025)

- Translate business rules into technical specifications and implement scalable data solutions.
- Manage a team of Data Engineers and oversee deliverables across multiple markets.
- Apply performance optimization techniques in Databricks to handle large-scale datasets.
- Collaborate with the Data Science team to prepare datasets for AI/ML model training.
- Partner with the BI team to understand reporting expectations and deliver high-quality datasets.
- Perform hands-on data modeling, including schema changes and accommodating new data attributes.
- Implement data quality checks before and after data transformations to ensure reliability.
- Troubleshoot and debug data issues, collaborating with source system/data teams for resolution.
- Contribute across project phases: requirement analysis, development, code review, SIT, UAT, and production deployment.
- Utilize Git for version control and manage CI/CD pipelines for seamless deployment across environments.
- Adapt to dynamic business requirements and ensure timely delivery of solutions.

Requirements
- Strong expertise in Azure Databricks, PySpark, and SQL.
- Proven experience in data engineering leadership and handling cross-market deliverables.
- Solid understanding of data modeling and ETL/ELT pipelines.
- Hands-on experience with performance optimization in big data processing.
- Proficiency in Git, CI/CD pipelines, and cloud-based deployment practices.
- Strong problem-solving and debugging skills with large, complex datasets.
- Excellent communication skills and ability to collaborate with cross-functional teams.
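"Data quality checks before and after data transformations" typically means capturing a metrics snapshot on either side of a step and comparing the two. Here is a minimal, hypothetical sketch in plain Python; the `id`/`market` fields, the dedupe transformation, and the chosen metrics (row count, missing-value counts) are illustrative, not a prescribed framework.

```python
def quality_report(records: list[dict], required: list[str]) -> dict:
    """Data-quality snapshot: row count plus missing-value counts per required field."""
    return {
        "rows": len(records),
        "missing": {f: sum(1 for r in records if r.get(f) in (None, "")) for f in required},
    }

def dedupe_by_id(records: list[dict]) -> list[dict]:
    """Example transformation: keep the first record seen for each id."""
    seen, out = set(), []
    for r in records:
        if r["id"] not in seen:
            seen.add(r["id"])
            out.append(r)
    return out

data = [{"id": 1, "market": "UK"}, {"id": 1, "market": "UK"}, {"id": 2, "market": None}]
before = quality_report(data, ["market"])          # snapshot before the transform
after = quality_report(dedupe_by_id(data), ["market"])  # snapshot after
print(before, after)
# {'rows': 3, 'missing': {'market': 1}} {'rows': 2, 'missing': {'market': 1}}
```

Comparing `before` and `after` makes the transformation auditable: the drop from 3 rows to 2 is explained entirely by the dedupe, while the unchanged missing count flags that a `market` backfill is still needed downstream.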