
3344 Big Data Jobs - Page 47

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Gen AI Experience: 4-12 years. Work Location: Chennai/Bangalore.

Mandatory Skills: Gen AI, LLM, RAG, LangChain, Llama, AI/ML, Deep Learning, Python, TensorFlow, PyTorch, Pandas, Prompt Engineering, Vector DB, MLOps.

Preferred Skills: AWS, GCP, or Azure Cloud; GPT-4; SQL; FastAPI/API development; Docker/Kubernetes; Hadoop, Spark, or Apache Flink data pipelines; banking exposure.

Educational Background: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.

- Programming Languages: Proficiency in Python, R, and API development (Python/FastAPI); experience with libraries and frameworks such as TensorFlow, PyTorch, and Keras.
- Gen AI & RAG: Experienced in Gen AI models, LangChain/Langflow, and prompt engineering.
- Machine Learning: Strong understanding of machine learning algorithms and deep learning.
- Data Science: Experience with data analysis, data visualization, and statistical modeling.
- Big Data Technologies: Familiarity with big data processing frameworks like Hadoop and Spark.
- Cloud Platforms: Experience with cloud services such as AWS, Google Cloud, or Azure.
- Awareness of applied statistics; experienced in algorithms and model development/evaluation.
- Exposure to MLOps; awareness of containerisation with Docker/Kubernetes.
- Problem-Solving: Strong analytical and problem-solving skills with the ability to think critically.
- Communication: Excellent verbal and written communication skills.

Performance Parameters and Measures:
1. Continuous integration, deployment & monitoring of software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan.
2. Quality & CSAT: on-time delivery, software management, troubleshooting of queries, customer experience, completion of assigned certifications for skill upgradation.
3. MIS & Reporting: 100% on-time MIS & report generation.

Mandatory Skills: Generative AI. Experience: 5-8 years.
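
For candidates sizing up the RAG requirement above, here is a minimal, self-contained sketch of the retrieval step: embed documents, rank them by cosine similarity, and assemble a grounded prompt. The corpus, vectors, and names are illustrative stand-ins for a real embedding model and vector DB, not any specific library's API.

```python
# Minimal RAG retrieval sketch: rank documents by cosine similarity and
# build a context-grounded prompt. Embeddings are placeholder vectors;
# a real system would call an embedding model and a vector database.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical corpus with pre-computed embeddings (stand-ins for a vector DB).
docs = {
    "doc1": ("Spark tuning guide", np.array([0.9, 0.1, 0.0])),
    "doc2": ("Loan-approval policy", np.array([0.1, 0.8, 0.2])),
}

def retrieve(query_vec: np.ndarray, k: int = 1) -> list[str]:
    ranked = sorted(docs.values(), key=lambda d: cosine_sim(query_vec, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

query_vec = np.array([0.2, 0.7, 0.3])  # stand-in for an embedded user question
context = "\n".join(retrieve(query_vec))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```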

Posted 1 month ago

Apply

6.0 - 11.0 years

32 - 40 Lacs

Hyderabad, Pune, Chennai

Hybrid

Data Software Engineer - Spark, Python, (AWS, Kafka or Azure Databricks or GCP)

Job Description:
1. 5-12 years of experience in Big Data and data-related technologies
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge and experience in Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
6. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
7. Experience with messaging systems such as Kafka or RabbitMQ
8. Good understanding of Big Data querying tools, such as Hive and Impala
9. Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
10. Good understanding of SQL queries, joins, stored procedures, and relational schemas
11. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
12. Knowledge of ETL techniques and frameworks
13. Performance tuning of Spark jobs
14. Experience with native cloud data services: AWS, Azure Databricks, or GCP
15. Ability to lead a team efficiently
16. Experience designing and implementing Big Data solutions
17. Practitioner of Agile methodology
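
As a rough illustration of item 6 (stream processing), here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic. It assumes Spark's Kafka connector package is on the classpath; the broker address and topic name are hypothetical.

```python
# Consume a Kafka topic with Spark Structured Streaming and echo records
# to the console sink. Requires the spark-sql-kafka connector package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    # Kafka delivers key/value as binary; cast to strings for inspection.
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```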

Posted 1 month ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Hyderabad

Work from Office

As part of our strategic initiative to build a centralized capability around data and cloud engineering, we are establishing a dedicated Azure Cloud Data Engineering practice. This team will be at the forefront of designing, developing, and deploying scalable data solutions on the cloud, primarily using the Microsoft Azure platform. The practice will serve as a centralized team, driving innovation, standardization, and best practices across cloud-based data initiatives. New hires will play a pivotal role in shaping the future of our data landscape, collaborating with cross-functional teams, clients, and stakeholders to deliver impactful, end-to-end solutions.

Primary Responsibilities:
- Ingest data from multiple on-prem and cloud data sources using various tools and capabilities in Azure
- Design and develop Azure Databricks processes using PySpark/Spark SQL
- Design and develop orchestration jobs using ADF and Databricks Workflows
- Analyze data engineering processes under development and act as an SME to troubleshoot performance issues and suggest improvements
- Develop and maintain CI/CD processes using Jenkins, GitHub, GitHub Actions, etc.
- Build test frameworks for Databricks notebook jobs for automated testing before code deployment
- Design and build POCs to validate new ideas, tools, and architectures in Azure
- Continuously explore new Azure services and capabilities and assess their applicability to business needs
- Create detailed documentation for cloud processes, architecture, and implementation patterns
- Work with the data & analytics team to build and deploy efficient data engineering processes and jobs on the Azure cloud
- Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
- Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
- Contribute to full-lifecycle project implementations, from design and development to deployment and monitoring
- Ensure solutions adhere to security, compliance, and governance standards
- Monitor and optimize data pipelines and cloud resources for cost and performance efficiency
- Identify solutions to non-standard requests and problems
- Support and maintain the self-service BI warehouse
- Mentor and support existing on-prem developers in the cloud environment
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Undergraduate degree or equivalent experience
- 4+ years of overall experience in data & analytics engineering
- 4+ years of experience working with Azure, Databricks, ADF, and Data Lake
- Solid experience working with data platforms and products using PySpark and Spark SQL
- Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
- In-depth understanding of Azure architecture and the ability to produce efficient designs and solutions
- Highly proficient in Python and SQL
- Proven excellent communication skills

Preferred Qualifications:
- Snowflake and Airflow experience
- Power BI development experience
- Experience with or knowledge of health care concepts: E&I, M&R, C&S LOBs, Claims, Members, Providers, Payers, Underwriting

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health that are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
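
As a hedged illustration of the ingestion work described above, here is a minimal Databricks-style PySpark sketch that lands Parquet files from ADLS into a Delta table. The storage account, container, paths, and column names are all hypothetical.

```python
# On Databricks: land raw Parquet from ADLS into a partitioned Delta table.
# Storage account, container, paths, and columns are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

raw = spark.read.parquet(
    "abfss://raw@mystorageacct.dfs.core.windows.net/claims/2024/")

# Basic hygiene before landing: dedupe on the key and drop null amounts.
cleaned = raw.dropDuplicates(["claim_id"]).filter("claim_amount IS NOT NULL")

# Assumes the source carries an ingest_date column to partition on.
(cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("abfss://curated@mystorageacct.dfs.core.windows.net/claims_delta/"))
```

In practice an ADF pipeline or Databricks Workflow would invoke a notebook like this on a schedule, which is the orchestration pattern the listing describes.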

Posted 1 month ago

Apply

5.0 - 9.0 years

25 - 32 Lacs

Pune, Chennai, Coimbatore

Work from Office

Job Description: We are seeking an experienced Data Engineer with expertise in Big Data technologies and a strong background in distributed computing. The ideal candidate will have a proven track record of designing, implementing, and optimizing scalable data solutions using tools like Apache Spark, Python, and various cloud-based platforms.

Key Requirements:
- Experience: 5-12 years of hands-on experience in Big Data and related technologies.
- Distributed Computing Expertise: Deep understanding of distributed computing principles and their application in real-world data systems.
- Apache Spark Mastery: Extensive experience leveraging Apache Spark for building large-scale data processing systems.
- Python Programming: Strong hands-on programming skills in Python, with a focus on data engineering and automation.
- Big Data Ecosystem Knowledge: Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop for managing and processing large datasets.
- Stream Processing Systems: Proven experience building and optimizing stream-processing systems using technologies like Apache Storm or Spark Streaming.
- Messaging Systems: Experience with messaging and event-streaming technologies, such as Kafka or RabbitMQ, for handling real-time data.
- Big Data Querying: Solid understanding of Big Data querying tools such as Hive and Impala for querying distributed data sets.
- Data Integration: Experience integrating data from diverse sources like RDBMS (e.g., SQL Server, Oracle), ERP systems, and flat files.
- SQL Expertise: Strong knowledge of SQL, including advanced queries, joins, stored procedures, and relational schemas.
- NoSQL Databases: Hands-on experience with NoSQL databases like HBase, Cassandra, and MongoDB for handling unstructured data.
- ETL Frameworks: Familiarity with various ETL techniques and frameworks for efficient data transformation and integration.
- Performance Optimization: Expertise in performance tuning and optimization of Spark jobs to handle large-scale datasets effectively.
- Cloud Data Services: Experience working with cloud-based data services such as AWS, Azure, Databricks, or GCP.
- Team Leadership: Proven ability to lead and mentor teams effectively, ensuring collaboration, growth, and project success.
- Big Data Solutions: Strong experience designing and implementing comprehensive Big Data solutions that are scalable, efficient, and reliable.
- Agile Methodology: Practical experience working within Agile frameworks to deliver high-quality data solutions in a fast-paced environment.

Please note: This role involves some face-to-face events, so please do not apply from other locations. Rest assured, your resume will be kept strictly confidential and will not be taken forward without your consent.
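
To make the performance-optimization requirement concrete, here is a small hedged sketch of two standard Spark tuning moves: broadcasting the small side of a join and repartitioning on the grouping key before a wide aggregation. Table paths and column names are hypothetical.

```python
# Two common Spark tuning moves: broadcast the small side of a join to
# avoid shuffling the large table, and repartition on the grouping key
# so the wide aggregation shuffles evenly. Paths/columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

orders = spark.read.parquet("/data/orders")        # large fact table
countries = spark.read.parquet("/data/countries")  # small dimension table

# Broadcast join: ships the small table to every executor.
joined = orders.join(broadcast(countries), on="country_code")

# Repartition on the grouping key before the aggregation.
daily = (joined.repartition(200, "order_date")
         .groupBy("order_date", "country_name")
         .sum("amount"))

daily.write.mode("overwrite").parquet("/data/daily_totals")
```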

Posted 1 month ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Pune

Work from Office

Qualification: Bachelor's or master's degree in computer science, IT, or a related field.

Tasks:
- Facilitate Agile ceremonies and lead Scrum practices.
- Support the Product Owner in backlog management and team organization.
- Promote Agile best practices (Scrum, SAFe) and continuous delivery improvements.
- Develop and maintain scalable data pipelines using AWS and Databricks (secondary focus).
- Collaborate with architects and contribute to solution design (support role).
- Occasionally travel for global team collaboration.

Requirements:
- Scrum Master or Agile team facilitation experience.
- Familiarity with Python and Databricks (PySpark, SQL).
- Good AWS cloud exposure (S3, EC2 basics).

Good to Have:
- Certified Scrum Master (CSM) or equivalent.
- Experience with ETL pipelines or data engineering concepts.
- Multi-cultural team collaboration experience.

Software Skills: JIRA, Confluence, Python (basic to intermediate), Databricks (basic)

Posted 1 month ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Pune

Work from Office

Qualification: Bachelor's or master's degree in computer science, IT, or a related field.

Roles & Responsibilities (Technical Role):
- Architect and build scalable data pipelines using AWS and Databricks.
- Integrate data from sensors (cameras, lidars, radars).
- Deliver proof-of-concepts and support system improvements.
- Ensure data quality and scalable design in solutions.

Requirements:
- Strong Python, Databricks (SQL, PySpark, Workflows), and AWS skills.
- Solid leadership and mentoring ability.
- Agile development experience.

Good to Have:
- AWS/Databricks certifications.
- Experience with Infrastructure as Code (Terraform/CDK).
- Exposure to machine learning data workflows.

Software Skills: Python, Databricks (SQL, PySpark, Workflows), AWS (S3, EC2, Glue), Terraform/CDK (good to have)
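
As a rough sketch of the sensor-integration work this role describes, the following PySpark snippet normalizes raw lidar JSON from S3 into partitioned Parquet. The bucket, field names, and schema are hypothetical.

```python
# Normalize raw sensor JSON (camera/lidar/radar) from S3 with PySpark.
# Bucket names, paths, and the record schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_unixtime

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

frames = spark.read.json("s3://sensor-landing/lidar/2024/06/")  # hypothetical bucket

normalized = (frames
    # Assumes each record carries an epoch timestamp field named ts_epoch.
    .withColumn("captured_at", from_unixtime(col("ts_epoch")))
    .select("vehicle_id", "sensor_id", "captured_at", "payload_uri"))

(normalized.write
    .mode("append")
    .partitionBy("vehicle_id")
    .parquet("s3://sensor-curated/lidar/"))
```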

Posted 1 month ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Coimbatore

Work from Office

About the job:

Experience: 5+ years. Notice Period: Immediate to 15 days. Interview Rounds: 3 (virtual).

Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
- Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
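
To illustrate the "data validation methods" responsibility above, here is a minimal PySpark validation gate that fails a pipeline run on empty input or too many null keys. The table name and thresholds are hypothetical.

```python
# A simple data-quality gate: row-count and null-rate checks before a
# pipeline promotes data. Table name and thresholds are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.table("staging.transactions")  # hypothetical Hive table

total = df.count()
null_keys = df.filter(col("txn_id").isNull()).count()

if total == 0:
    raise ValueError("Validation failed: staging table is empty")
if null_keys / total > 0.01:  # allow at most 1% null keys
    raise ValueError(f"Validation failed: {null_keys}/{total} rows have null txn_id")

print(f"Validation passed: {total} rows, {null_keys} null keys")
```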

Posted 1 month ago

Apply

4.0 - 7.0 years

25 - 30 Lacs

Bengaluru

Hybrid

Looking for immediate joiners. Interested candidates, kindly revert with your updated resume to 'lavanya.n@miqdigital.com'.

Role: Data Management - Senior Software Engineer
Location: Bangalore

What you'll do

We're MiQ, a global programmatic media partner for marketers and agencies. Our people are at the heart of everything we do, so you will be too. No matter the role or the location, we're all united in the vision to lead the programmatic industry and make it better. As an SSE in our Technology department, you will need:
- Hands-on experience with Big Data technologies such as Databricks, Snowflake, EMR, Trino, Athena, StarTree, SageMaker Studio, etc., with a strong foundation in data engineering concepts.
- Proficiency in data processing and transformation using PySpark and SQL, and familiarity with at least one JVM-based language (Java/Scala/Kotlin).
- Familiarity with microservice integration in data systems; understanding of the basic principles of interoperability and service communication.
- Solid experience in data pipeline development and familiarity with orchestration frameworks (e.g., Airflow, DBT), with an ability to build scalable and reliable ETL workflows.
- Exposure to MLOps/DataOps practices, with contributions to the rollout or maintenance of production pipelines.
- Knowledge of observability frameworks and practices to support platform reliability and troubleshooting; working knowledge of observability tools (e.g., Prometheus, Grafana, Datadog) is highly desirable.
- Experience assisting in ETL optimization, platform issue resolution, and performance tuning in collaboration with other engineering teams.
- Good understanding of access management, including RBAC, ABAC, and PBAC, and familiarity with auditing and compliance basics.
- Practical experience with cloud infrastructure (AWS preferred), including EC2, S3, IAM, VPC basics, and Terraform or similar IaC tools.
- Understanding of CI/CD pipelines and the ability to contribute to release automation, deployment strategies, and system testing.
- Interest in data governance; working exposure to cataloging tools such as Unity Catalog, Amundsen, or Apache Atlas would be great.
- Strong problem-solving skills with a collaborative mindset and a passion for exploring AI tools, frameworks, and emerging technologies in the data space.
- Demonstrated ownership, initiative, and curiosity while contributing to research, platform improvements, and code quality standards.

Who are your stakeholders
1. Business Analysts
2. Data Engineers
3. Data Scientists

What you'll bring
- Technical Expertise: Deep understanding of the latest Big Data technologies (Spark engines; SQL engines like Databricks, Apache Pinot, EMR, etc.) and proficiency in building and managing complex data pipelines.
- Platform Optimisation: Experience optimising platform performance and cost management, ensuring scalable solutions that meet organisational needs without exceeding budget.
- Innovation and R&D: A forward-thinking mindset with a passion for exploring new data technologies, continuously seeking ways to enhance platform capabilities and efficiency.
- Infrastructure Management: Proven experience managing cloud-based infrastructures, including networking, deployment, and monitoring, while ensuring reliability and high availability.
- Governance and Metadata Management: Knowledge of data governance frameworks, ensuring proper data cataloging, lineage, and metadata management to drive data quality and transparency.
- Proactive Problem Solver: Strong analytical skills, with a solution-oriented approach to overcoming technical challenges and finding innovative solutions for complex data problems.
- Cost Efficiency: Ability to implement cost-optimisation strategies and ensure efficient resource utilisation, helping the team minimise waste and maximise value.
- Security Best Practices: Implementing best practices in data security, including encryption, hashing, key management, and access controls, to protect sensitive data across platforms.
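
As a concrete, hedged example of the orchestration frameworks mentioned above, here is a minimal Airflow 2.x DAG wiring three placeholder ETL tasks; the dag_id, schedule, and task bodies are illustrative (the `schedule` argument assumes Airflow 2.4 or later).

```python
# A minimal Airflow 2.x DAG: three placeholder ETL steps run in sequence.
# dag_id, schedule, and task bodies are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")

def transform():
    print("clean and aggregate")

def load():
    print("write to warehouse")

with DAG(dag_id="daily_etl", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```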

Posted 1 month ago

Apply

7.0 - 12.0 years

25 - 32 Lacs

Bengaluru

Work from Office

Please find brief job details in attachment.

Mandatory skills: Big Data and ETL design and development experience with Spark or Scala programming, Oracle/PL SQL, AWS, Python, Teradata data warehouses, and Cloudera Hadoop
Experience Level: 7-12 years
Joining Time: Immediate to 15 days only
Mode of Interview: 2 virtual rounds
Duration of Project: Long term
Work Mode: Bangalore office, hybrid; 40% in office (minimum 2 to 3 days work from office)
Shift Timing: 8am-5pm IST (free cab facility provided both ways)
Location: Bangalore (Address: WeWorks NXT Tower, 1, Manyata Tech Park Rd, MS Ramaiah North City, Manayata Tech Park, Thanisandra,

Posted 1 month ago

Apply

9.0 - 14.0 years

15 - 19 Lacs

Bengaluru

Work from Office

About the Role:
We are looking for an Associate Architect with 9+ years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.

Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.
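
Since the listing calls out data quality checks with Great Expectations, here is a brief sketch using the classic pandas-backed API (newer releases restructure this interface substantially); the column names and bounds are hypothetical.

```python
# A hedged Great Expectations sketch using the classic pandas-backed API.
# Newer GE releases restructure this interface; columns/bounds are hypothetical.
import great_expectations as ge
import pandas as pd

df = ge.from_pandas(pd.DataFrame({
    "event_id": [1, 2, 3],
    "amount": [10.0, 99.5, 42.0],
}))

# Declare expectations; each call validates immediately and returns a result.
r1 = df.expect_column_values_to_not_be_null("event_id")
r2 = df.expect_column_values_to_be_between("amount", min_value=0, max_value=100)

assert r1.success and r2.success, "Data quality check failed"
print("All expectations passed")
```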

Posted 1 month ago

Apply

5.0 - 8.0 years

0 - 3 Lacs

Chennai, Coimbatore, Bengaluru

Work from Office

Position: Big Data Engineer. Experience: 4-8 years. Location: Chennai, Bangalore, Coimbatore, and Kolkata. Mandatory Skills: Big Data technologies, Python, PySpark, SQL, JavaSpark

Posted 1 month ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Bengaluru

Work from Office

About the Role:
We are looking for an Associate Architect with 9+ years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.

Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 1 month ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

About the Role:
We are looking for an Associate Architect with 9+ years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.

Key Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Good understanding of open table formats like Delta and Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build/enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience:
- 9+ years of experience building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.

Soft Skills:
- Strong analytical and problem-solving skills with a pragmatic approach to technical challenges.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- Ability to lead large-scale projects in a fast-paced, dynamic environment.
- Passion for continuous learning, open-source collaboration, and building best-in-class data products.

Posted 1 month ago

Apply

7.0 - 12.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Amazon Web Services (AWS)
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the team in implementing cutting-edge technologies
- Conduct regular code reviews and provide constructive feedback
- Stay updated with the latest industry trends and technologies

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark
- Strong understanding of distributed computing principles
- Experience in building scalable and efficient data pipelines
- Proficient in the Python programming language
- Hands-on experience with big data technologies like Hadoop and Spark
- Good To Have Skills: Experience with Apache Kafka

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in PySpark
- This position is based at our Hyderabad office
- A 15 years full-time education is required

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will be responsible for designing, building, and configuring applications to meet business process and application requirements in Hyderabad. You will play a crucial role in the development and implementation of software solutions.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Conduct code reviews and ensure coding standards are met
- Stay updated on industry trends and best practices

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform
- Good To Have Skills: Experience with Python (Programming Language)
- Strong understanding of data analytics and data processing
- Experience in building and configuring applications
- Knowledge of the software development lifecycle
- Ability to troubleshoot and debug applications

Additional Information:
- The candidate should have a minimum of 12 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Hyderabad office
- A 15 years full-time education is required

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Data Analytics
Good to have skills: Microsoft SQL Server, Python (Programming Language), AWS Redshift
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Senior Analyst, Data Engineering, you will be part of the Data and Analytics team, responsible for developing and delivering high-quality data assets and managing data domains for Personal Banking customers and colleagues. You will bring expertise in data handling, curation, and conformity, and support the design and development of data solutions that drive business value. You will work in an agile environment to build scalable and reliable data pipelines and platforms within a complex enterprise.

Roles & Responsibilities:
- Hands-on development experience in Data Warehousing and/or Software Development.
- Utilize tools and best practices to build, verify, and deploy data solutions efficiently.
- Perform data integration and sourcing activities across various platforms.
- Develop data assets to support optimized analysis for customer and regulatory outcomes.
- Provide ongoing support for data platforms, including problem and incident management.
- Collaborate in Agile software development environments using tools like GitHub, Confluence, and Rally.
- Support continuous improvement and innovation in data engineering practices.

Professional & Technical Skills:
- Must To Have Skills: Experience with cloud technologies, especially AWS (S3, Redshift, Airflow).
- Proficiency in DevOps and DataOps tools such as Jenkins, Git, and Erwin.
- Advanced skills in SQL and Python.
- Working knowledge of UNIX, Spark, and Databricks.

Additional Information:
- Position: Senior Analyst, Data Engineering
- Reports to: Manager, Data Engineering
- Division: Personal Bank
- Group: 3
- Industry/Domain Skills: Experience in Retail Banking, Business Banking, or Wealth Management preferred

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Python (Programming Language), AWS Architecture
Minimum 5 year(s) of experience is required
Educational Qualification: Any technical graduation

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using PySpark. Your typical day will involve working with PySpark, Oracle Procedural Language Extensions to SQL (PLSQL), and other related technologies to develop and maintain applications.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions
- Build and operate very large data warehouses or data lakes
- Optimize ETL: design, code, and tune big data processes using Apache Spark
- Build data pipelines and applications to stream and process datasets at low latencies
- Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data

Professional & Technical Skills:
- Minimum of 1 year of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark
- Minimum of 3 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery
- Minimum of 2 years of experience in one or more programming languages: Python, Java, Scala
- Experience using Airflow for data pipelines in at least 1 project
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful software solutions.
- This position is based at our Hyderabad office.
- The resource should be willing to work in B shift (12 PM to 10 PM).

Posted 1 month ago

Apply

7.0 - 12.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Amazon Web Services (AWS)
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements in Hyderabad. You will play a crucial role in the development and implementation of software solutions.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions
- Build and operate very large data warehouses or data lakes
- Optimize ETL: design, code, and tune big data processes using Apache Spark
- Build data pipelines and applications to stream and process datasets at low latencies
- Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data

Technical Experience:
- Minimum of 5 years of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake
- Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery
- Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis
- Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala
- Experience using Airflow for data pipelines in at least 1 project
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform

Professional Attributes:
- Ready to work in B shift (12 PM to 10 PM)
- Client-facing skills: solid experience working in client-facing environments and building trusted relationships with client stakeholders
- Good critical thinking and problem-solving abilities
- Health care knowledge
- Good communication skills

Educational Qualification: Bachelor of Engineering / Bachelor of Technology

Additional Information:
- Data Engineering, PySpark, AWS, Python, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python, or AWS
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Hyderabad office
- A 15 years full-time education is required

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Oracle Procedural Language Extensions to SQL (PLSQL), Amazon Web Services (AWS)
Minimum 7.5 year(s) of experience is required
Educational Qualification: Any Graduation

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using PySpark. Your typical day will involve working with PySpark, Oracle Procedural Language Extensions to SQL (PLSQL), and other related technologies to develop and maintain applications.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions
- Build and operate very large data warehouses or data lakes
- Optimize ETL: design, code, and tune big data processes using Apache Spark
- Build data pipelines and applications to stream and process datasets at low latencies
- Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data

Professional & Technical Skills:
- Minimum of 1 year of experience in Databricks engineering solutions on AWS Cloud platforms using PySpark
- Minimum of 3 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery
- Minimum of 2 years of experience in one or more programming languages: Python, Java, Scala
- Experience using Airflow for data pipelines in at least 1 project
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful software solutions.
- This position is based at our Hyderabad office.
- The resource should be willing to work in B shift (12 PM to 10 PM).

Posted 1 month ago

Apply

15.0 - 20.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: Microsoft Power Business Intelligence (BI), Microsoft Azure Databricks
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives. Your role will require you to analyze requirements, propose solutions, and contribute to the continuous improvement of the data platform, making it a dynamic and engaging work environment.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with Microsoft Power Business Intelligence (BI), Microsoft Azure Databricks.
- Strong understanding of data integration techniques and best practices.
- Experience with data modeling and database design.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Posted 1 month ago

Apply

15.0 - 25.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: The Data Mesh Expert is responsible for supporting the design of the conceptual frameworks for data contract management within the SSDP (Self-Serve Data Platform based on Databricks). This expert will define how data quality is governed and formalize the expectations between data producers and consumers by establishing data contracts as an integral part of a data product. A key responsibility is to test this proposal through a Proof-of-Concept implementation on the SSDP and EDC, and to find common acceptance of the framework such that the finalized result is a reusable pattern for all data domains.

Roles & Responsibilities:
- Expected to be a Subject Matter Expert with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate workshops and discussions to gather requirements and feedback from stakeholders.
- Mentor junior professionals in best practices and emerging technologies.

Professional & Technical Skills:
- Data Governance Expertise: Deep knowledge of data quality management principles and data contract lifecycle management.
- Data Mesh & Architecture: Strong understanding of Data Mesh principles and experience in designing conceptual solutions for data platforms.
- Technical Proficiency: Familiarity with data platform technologies and data catalogs (like Atlan) to design and oversee a Proof-of-Concept; understanding of how contracts can be validated in data pipelines and CI/CD processes.
- Stakeholder Management: Ability to coordinate with and gather feedback from various data domain teams and persuade them on the final concept; ability to collaborate effectively with the Data Mesh Enablement Team for the final handover of deliverables.

Additional Information:
- The candidate should have a minimum of 15 years of experience in Data Engineering.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives. Your role will require you to analyze requirements, propose solutions, and contribute to the overall strategy of the data platform, making it a dynamic and impactful position within the team.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance standards.
- Ability to work with large datasets and perform data analysis.
- Proficient in Databricks, having delivered with it on a project before.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark, Python (Programming Language), Amazon Web Services (AWS)
Good to have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication within the team and with stakeholders.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the application development process effectively.
- Ensure seamless communication within the team and with stakeholders.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark, Amazon Web Services (AWS), and Python (Programming Language).
- Strong understanding of data processing and analysis.
- Experience in designing and implementing scalable applications.
- Proficient in troubleshooting and problem-solving in application development.

Additional Information:
- The candidate should have a minimum of 12 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Posted 1 month ago

Apply

15.0 - 25.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: We are seeking a highly experienced Senior Databricks Expert to provide critical technical guidance and support for the productionization of key data products. This role requires a deep understanding of the Databricks Lakehouse Platform and a pragmatic approach to implementing robust, scalable, and performant solutions.

Roles & Responsibilities:
- Expected to be a Subject Matter Expert with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate workshops and discussions to gather requirements and feedback from stakeholders.
- Mentor junior professionals in best practices and emerging technologies.

Key Technical Skills & Experience (Must Have):
- Databricks Platform Mastery: Deep, hands-on expertise across the Databricks Lakehouse Platform, including Delta Lake, Spark SQL, Databricks compute, and an understanding of Unity Catalog principles.
- Advanced Spark Development & Optimization: Proven ability to write, review, and significantly optimize complex Apache Spark (Python/Scala) applications for performance, stability, and efficiency in production.
- Production Data Engineering & Architecture: Strong experience in designing, validating, and troubleshooting production-grade data pipelines and architectures on Databricks, adhering to data modeling and software engineering best practices.
- CI/CD & DevOps for Databricks: Practical experience implementing and advising on CI/CD practices for Databricks projects (e.g., using the Databricks CLI, Repos, dbx, Azure DevOps, GitHub Actions) for automated testing and deployment.
- Databricks Security & Governance: Solid understanding of Databricks security features, including access control models, secrets management, and network configurations, with the ability to advise on their practical application.
- Operational Excellence on Databricks: Experience with monitoring, logging, alerting, and performance tuning strategies for Databricks jobs and clusters to ensure operational reliability and efficiency.
- Machine Learning on Databricks: Experience with MLflow, Model Serving, and best practices for securely exposing model endpoints.
- Problem-Solving & Mentorship: Excellent analytical and troubleshooting skills, with a proven ability to diagnose complex issues and effectively communicate solutions and best practices to technical teams in a supportive, advisory capacity.

Good to Have:
- Familiarity with Google Cloud Platform

Additional Information:
- The candidate should have a minimum of 15 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
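
To ground the "Machine Learning on Databricks" requirement, here is a minimal MLflow tracking sketch that logs parameters and metrics and registers a scikit-learn model. The experiment path and model name are hypothetical, and recent MLflow releases rename some arguments, so treat this as a sketch rather than a definitive recipe.

```python
# Minimal MLflow sketch: train a toy model, log params/metrics, and
# register it. Experiment path and model name are hypothetical; model
# registration assumes a tracking server with a model registry
# (available by default on Databricks).
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

mlflow.set_experiment("/Shared/demo-experiment")  # hypothetical experiment path

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="demo-classifier")
```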

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Chennai

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Lead the design and development of applications.
- Act as the primary point of contact for the project team.
- Provide guidance and mentorship to junior team members.
- Collaborate with stakeholders to gather requirements and define project scope.
- Ensure timely delivery of high-quality software solutions.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark.
- Strong understanding of data processing and analysis.
- Experience with big data technologies like Hadoop and Spark.
- Hands-on experience building scalable data pipelines.
- Familiarity with cloud platforms like AWS or Azure.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Chennai office.
- A 15 years full-time education is required.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
