7.0 - 12.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Python (Programming Language)
Good-to-have skills: Microsoft Power BI, Google BigQuery, Apache Airflow
Minimum experience required: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Act as a subject matter expert (SME)
- Collaborate with and manage the team to perform
- Be responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the data platform blueprint and design
- Implement data platform components
- Ensure seamless integration between systems and data models

Professional & Technical Skills:
- Must have: proficiency in Python (Programming Language)
- Good to have: experience with Apache Airflow, Google BigQuery, Microsoft Power BI
- Strong understanding of data engineering principles
- Experience in building scalable data platforms
- Proficiency in data modeling and database design

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Python (Programming Language)
- This position is based at our Bengaluru office
- 15 years of full-time education is required
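As a rough illustration of the Python-plus-Airflow skills this role lists, here is a minimal sketch of a daily pipeline DAG; the DAG name and the extract/load callables are hypothetical placeholders, and it assumes Airflow 2.4 or later:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Hypothetical extract step: pull yesterday's records from a source system.
    print("extracting orders...")


def load_to_bigquery():
    # Hypothetical load step: write the transformed records to a warehouse table.
    print("loading to BigQuery...")


with DAG(
    dag_id="daily_orders_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # `schedule` replaces `schedule_interval` in Airflow 2.4+
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_to_bigquery)
    extract >> load  # run extract before load
```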
Posted 3 weeks ago
3.0 - 7.0 years
20 - 27 Lacs
Gurugram
Work from Office
The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision, and should be self-motivated with a passion for problem solving and continuous learning.

Role and responsibilities:
- Strong technical, analytical, and problem-solving skills
- Strong organizational skills, with the ability to work autonomously as well as in a team-based environment
- Data pipeline framework development

Technical skills requirements. The candidate must demonstrate proficiency in:
- CDH (on-premise) for data processing and extraction
- Ability to own and deliver on large, multi-faceted projects
- Fluency in complex SQL and experience with RDBMSs
- Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive, and NoSQL databases
- Designing and building big data pipelines
- Working on large-scale, distributed systems
- Experience with Databricks is an added advantage
- Strong hands-on experience with programming languages such as PySpark, Scala with Spark, and Python
- Exposure to various ETL and Business Intelligence tools
- Shell scripting to automate pipeline execution
- Solid grounding in Agile methodologies
- Experience with Git and other source control systems
- Strong communication and presentation skills

Nice-to-have skills:
- Certification in Hadoop/Big Data (Hortonworks/Cloudera)
- Databricks Spark certification
- Unix or shell scripting
- Strong delivery background across high-value, business-facing technical projects in major organizations
- Experience managing client delivery teams, ideally coming from a Data Engineering / Data Science environment

Qualifications: B.Tech/M.Tech/MS or BCA/MCA degree from a reputed university
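A minimal PySpark sketch of the kind of batch pipeline work described above; the HDFS paths, column names, and filter condition are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch_pipeline").getOrCreate()

# Read raw CSV data (hypothetical path and schema).
raw = spark.read.option("header", True).csv("hdfs:///data/raw/orders/")

# Clean and aggregate: drop bad rows, then total revenue per day.
daily = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .groupBy("order_date")
       .agg(F.sum("amount").alias("daily_revenue"))
)

# Write the result as partitioned Parquet for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "hdfs:///data/curated/daily_revenue/"
)
```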
Posted 3 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions.
- Build and operate very large data warehouses and data lakes.
- Optimize, design, code, and tune big data processes using Apache Spark.
- Build data pipelines and applications to stream and process datasets at low latencies.
- Handle data efficiently: track data lineage, ensure data quality, and improve discoverability of data.

Technical Experience:
- Minimum of 5 years of experience in Databricks engineering solutions on AWS cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake.
- Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis.
- Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least one project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Professional Attributes:
- Ready to work in B shift (12 PM to 10 PM).
- Client-facing skills: solid experience working in client-facing environments and building trusted relationships with client stakeholders.
- Good critical thinking and problem-solving abilities.
- Health care domain knowledge.
- Good communication skills.

Educational Qualification: Bachelor of Engineering / Bachelor of Technology

Additional Information:
- Data Engineering, PySpark, AWS, Python, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python, or AWS.
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
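A minimal PySpark/Delta Lake sketch of the pipeline style this posting describes; the bucket paths are hypothetical, and it assumes a Databricks or Spark environment with the Delta Lake libraries available:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta_ingest").getOrCreate()

# Ingest a day's worth of raw JSON events (hypothetical source path).
events = spark.read.json("s3://raw-bucket/events/2024-01-01/")

# Light transformation: keep valid events and stamp the load time.
clean = (
    events.filter(F.col("event_id").isNotNull())
          .withColumn("loaded_at", F.current_timestamp())
)

# Write to a Delta table; Delta adds ACID transactions and time travel on top of Parquet.
clean.write.format("delta").mode("append").save("s3://lake-bucket/delta/events/")
```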
Posted 3 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Microsoft Azure Databricks, PySpark
Minimum experience required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop innovative solutions and streamline processes.

Roles & Responsibilities:
- Act as a subject matter expert (SME)
- Collaborate with and manage the team to perform
- Be responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the development and implementation of new applications
- Conduct code reviews and ensure coding standards are met
- Stay updated on industry trends and best practices

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform
- Good to have: experience with PySpark
- Strong understanding of data engineering concepts
- Experience in building and optimizing data pipelines
- Knowledge of cloud platforms such as Microsoft Azure
- Familiarity with data governance and security practices

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Bengaluru office
- 15 years of full-time education is required
Posted 3 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: PySpark
Minimum experience required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Key Responsibilities:
- Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions.
- Build and operate very large data warehouses and data lakes.
- Optimize, design, code, and tune big data processes using Apache Spark.
- Build data pipelines and applications to stream and process datasets at low latencies.
- Handle data efficiently: track data lineage, ensure data quality, and improve discoverability of data.

Technical Experience:
- Minimum of 5 years of experience in Databricks engineering solutions on AWS cloud platforms using PySpark, Databricks SQL, and data pipelines using Delta Lake.
- Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture and delivery.
- Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis.
- Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala.
- Experience using Airflow for data pipelines in at least one project.
- 1 year of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Professional Attributes:
- Ready to work in B shift (12 PM to 10 PM).
- Client-facing skills: solid experience working in client-facing environments and building trusted relationships with client stakeholders.
- Good critical thinking and problem-solving abilities.
- Health care domain knowledge.
- Good communication skills.

Educational Qualification: Bachelor of Engineering / Bachelor of Technology

Additional Information:
- Data Engineering, PySpark, AWS, Python, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python, or AWS.
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
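Since this posting calls out real-time streaming with Kafka, here is a minimal Spark Structured Streaming sketch; the broker address, topic, and output paths are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

# Subscribe to a Kafka topic (hypothetical broker and topic names);
# requires the spark-sql-kafka connector package.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "orders")
         .load()
)

# Kafka delivers key/value as binary; cast the payload to a string for downstream parsing.
parsed = stream.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

# Continuously append micro-batches to a sink, checkpointing progress for fault tolerance.
query = (
    parsed.writeStream.format("parquet")
          .option("path", "s3://lake-bucket/streams/orders/")
          .option("checkpointLocation", "s3://lake-bucket/checkpoints/orders/")
          .start()
)
query.awaitTermination()
```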
Posted 3 weeks ago
2.0 - 4.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Data Analytics
Good-to-have skills: NA
Minimum experience required: 5 years
Educational Qualification: 15 years of full-time education

Position Summary: The Data Analyst will focus on collecting, cleaning, and analyzing data to support business decisions.

Key Responsibilities:
- Gather, process, and analyze data to identify trends and insights.
- Develop dashboards and reports to communicate findings.
- Collaborate with stakeholders to understand data needs.
- Ensure data accuracy and quality in all analyses.
- Prepare and clean datasets for analysis to ensure accuracy and usability.
- Generate reports and dashboards to communicate key performance metrics.
- Support data-driven decision-making by identifying actionable insights.
- Monitor data pipelines and troubleshoot issues to ensure smooth operation.
- Collaborate with cross-functional teams to understand and meet data needs.

Qualifications:
- Bachelor's degree in a relevant field (e.g., Data Science, Statistics, Computer Science).
- 2-4 years of experience in data analytics.
- Proficiency in tools like Power BI, Tableau, and SQL.
- Strong analytical and problem-solving skills.
- Effective communication and teamwork abilities.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Analytics.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.
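A small pandas sketch of the dataset preparation and quality checks this role describes; the file name and columns are hypothetical:

```python
import pandas as pd

# Load a raw extract (hypothetical file and columns).
df = pd.read_csv("sales_extract.csv")

# Basic quality checks: row count, nulls per column, duplicate keys.
print(f"rows: {len(df)}")
print(df.isna().sum())
print(f"duplicate order ids: {df['order_id'].duplicated().sum()}")

# Clean: drop exact duplicates, fill missing regions, standardize dates.
clean = (
    df.drop_duplicates()
      .assign(
          region=lambda d: d["region"].fillna("UNKNOWN"),
          order_date=lambda d: pd.to_datetime(d["order_date"], errors="coerce"),
      )
      .dropna(subset=["order_date"])
)

# Aggregate for a simple dashboard-style metric: monthly revenue.
monthly = clean.groupby(clean["order_date"].dt.to_period("M"))["amount"].sum()
print(monthly)
```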
Posted 3 weeks ago
5.0 - 10.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; work may also include deep learning, neural networks, chatbots, and image processing.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum experience required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. You will apply GenAI models as part of the solution, including deep learning, neural networks, chatbots, and image processing.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the implementation of AI/ML models.
- Conduct research on emerging AI technologies.
- Optimize AI algorithms for performance and scalability.

Professional & Technical Skills:
- Must have: proficiency in Data Engineering.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
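As a rough sketch of the algorithms this posting names, here is a minimal scikit-learn example on synthetic data; the dataset and parameters are illustrative only:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for real feature data.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Supervised: logistic regression classifier.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised: k-means clustering on the same features.
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```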
Posted 3 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
Chennai
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Scala
Good-to-have skills: NoSQL
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Scala Developer, you will be responsible for designing, building, and maintaining robust, scalable, and high-performance applications using the Scala programming language. You will work on implementing functional programming paradigms, integrating with distributed systems, and building APIs and backend services. Your role will involve active collaboration with cross-functional teams to understand business needs and transform them into reliable software solutions that align with the organization's technical goals and standards.

Roles & Responsibilities:
- Perform independently and grow into a subject matter expert (SME) in Scala and functional programming.
- Actively contribute to technical discussions, design reviews, and code reviews within the team.
- Design, develop, test, and maintain backend services and APIs using Scala.
- Collaborate with architects and other developers to implement efficient, scalable, and reliable systems.
- Troubleshoot, debug, and optimize application performance.
- Ensure adherence to best practices in software development, testing, and deployment.

Professional & Technical Skills:
- Must have: proficiency in Scala and functional programming concepts.
- Hands-on experience with Scala frameworks such as Akka, Play Framework, Cats, or ZIO.
- Experience building and integrating RESTful APIs or working with GraphQL.
- Exposure to event-driven architectures and messaging systems like Kafka.
- Proficiency with relational and non-relational databases (e.g., PostgreSQL, NoSQL stores).
- Experience integrating Apache Spark with Scala for large-scale data processing.
- Strong understanding of concurrency, asynchronous programming, and distributed systems.
- Familiarity with CI/CD tools and cloud environments, preferably AWS.
- Working knowledge of unit testing, integration testing, and agile development methodologies such as TDD or BDD.

Additional Information:
- The candidate should have a minimum of 2-3 years of experience in Scala development.
- This position is based at our Bengaluru office.
- A minimum of 15 years of full-time education is required.
- Strong communication and team collaboration skills are essential.
Posted 3 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
Chennai
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Python (Programming Language)
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL)
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems, and be involved in the end-to-end data management process.

Roles & Responsibilities:
- Perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Develop and maintain data pipelines.
- Ensure data quality throughout the data management process.

Professional & Technical Skills:
- Must have: proficiency in Python (Programming Language).
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Python (Programming Language).
- This position is based at our Gurugram office.
- 15 years of full-time education is required.
Posted 3 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Chennai
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Python (Programming Language)
Good-to-have skills: Google BigQuery, Google Cloud Platform Architecture
Minimum experience required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your day will involve working on data solutions and collaborating with teams to optimize data processes.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Develop and maintain data pipelines.
- Ensure data quality and integrity.
- Implement ETL processes for data migration and deployment.

Professional & Technical Skills:
- Must have: proficiency in Python (Programming Language).
- Good to have: experience with Google BigQuery and Google Cloud Platform architecture.
- Strong understanding of data engineering concepts.
- Experience in designing and implementing data solutions.
- Knowledge of ETL processes and data migration techniques.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Chennai office.
- 15 years of full-time education is required.
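Since this posting pairs Python with BigQuery, here is a minimal sketch using the official BigQuery Python client; the project, dataset, and table names are hypothetical, and credentials are assumed to be configured in the environment:

```python
from google.cloud import bigquery

# Assumes GOOGLE_APPLICATION_CREDENTIALS (or ambient GCP auth) is configured.
client = bigquery.Client()

# Hypothetical table: aggregate rows with standard SQL.
sql = """
    SELECT order_date, SUM(amount) AS daily_revenue
    FROM `my-project.sales.orders`
    GROUP BY order_date
    ORDER BY order_date
"""

# query() submits the job; result() blocks until it completes and yields rows.
for row in client.query(sql).result():
    print(row.order_date, row.daily_revenue)
```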
Posted 3 weeks ago
6.0 - 8.0 years
7 - 17 Lacs
Hyderabad
Work from Office
Lead Analyst/Senior Software Engineer - Data Engineer with Python, Apache Spark, HDFS

Job Overview: CGI is looking for a talented and motivated Data Engineer with strong expertise in Python, Apache Spark, HDFS, and MongoDB to build and manage scalable, efficient, and reliable data pipelines and infrastructure. You'll play a key role in transforming raw data into actionable insights, working closely with data scientists, analysts, and business teams.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and Spark.
- Ingest, process, and transform large datasets from various sources into usable formats.
- Manage and optimize data storage using HDFS and MongoDB.
- Ensure high availability and performance of data infrastructure.
- Implement data quality checks, validations, and monitoring processes.
- Collaborate with cross-functional teams to understand data needs and deliver solutions.
- Write reusable and maintainable code with strong documentation practices.
- Optimize performance of data workflows and troubleshoot bottlenecks.
- Maintain data governance, privacy, and security best practices.

Required qualifications to be successful in this role:
- Minimum 6 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in Python for data manipulation and pipeline development.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with HDFS and distributed data storage systems.
- Strong understanding of data architecture, data modeling, and performance tuning.
- Familiarity with version control tools like Git.
- Experience with workflow orchestration tools (e.g., Airflow, Luigi) is a plus.
- Knowledge of cloud services (AWS, GCP, or Azure) is preferred.
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Preferred Skills:
- Experience with containerization (Docker, Kubernetes).
- Knowledge of real-time data streaming tools like Kafka.
- Familiarity with data visualization tools (e.g., Power BI, Tableau).
- Exposure to Agile/Scrum methodologies.

Skills: Hadoop, Hive, Python, SQL, English

Note: This role will require 8 weeks of in-office work after joining, after which we will transition to a hybrid working model, with 2 days per week in the office.
Mode of interview: face to face. Registration window: 9 am to 12.30 pm. Candidates who are shortlisted will be required to stay throughout the day for subsequent rounds of interviews.
Notice period: 0-45 days.
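A small sketch of the Python/MongoDB side of this role, writing pipeline output into a collection; the connection string, database, and collection names are hypothetical:

```python
from pymongo import MongoClient

# Hypothetical connection string; in practice this comes from configuration.
client = MongoClient("mongodb://localhost:27017/")
collection = client["analytics"]["daily_revenue"]

# Records produced by an upstream Spark/Python job (illustrative values).
records = [
    {"order_date": "2024-01-01", "daily_revenue": 1250.0},
    {"order_date": "2024-01-02", "daily_revenue": 980.5},
]

# Upsert keyed on order_date so reruns of the pipeline stay idempotent.
for rec in records:
    collection.replace_one({"order_date": rec["order_date"]}, rec, upsert=True)

print(collection.count_documents({}))
```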
Posted 3 weeks ago
4.0 - 9.0 years
9 - 14 Lacs
Pune
Work from Office
Qualifications:
- Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or a related field.
- 5+ years of experience in AI/ML solution design and development.
- Proficiency in AI/ML frameworks (TensorFlow, PyTorch, Scikit-learn, etc.).
- Strong programming skills in Python, R, or Java.
- Hands-on experience with cloud platforms (AWS, Azure, GCP).
- Solid understanding of data engineering concepts and tools (Spark, Hadoop, Kafka, etc.).
- Experience with MLOps practices, including CI/CD for AI models.
- Strong problem-solving, communication, and leadership skills.

Preferred Qualifications:
- AI/ML certifications (AWS Certified Machine Learning, Google Professional ML Engineer, etc.).
- Experience in natural language processing (NLP) or computer vision.
- Knowledge of AI governance and ethical AI practices.
- Familiarity with AI model explainability and bias detection.
Posted 3 weeks ago
5.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer/Developer
Number of Positions: 2

The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.

Responsibilities:
- Design, construct, install, test, and maintain highly scalable data management systems and data pipelines.
- Ensure systems meet business requirements and industry practices.
- Build high-performance algorithms, prototypes, predictive models, and proofs of concept.
- Research opportunities for data acquisition and new uses for existing data.
- Develop data set processes for data modeling, mining, and production.
- Integrate new data management technologies and software engineering tools into existing structures.
- Create custom software components and analytics applications.
- Install and update disaster recovery procedures.
- Collaborate with data architects, modelers, and IT team members on project goals.
- Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- Proven 5-8 years of experience as a Senior Data Engineer or in a similar role.
- Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc.
- Expert-level SQL skills for data manipulation (DML) and validation (DB2).
- Experience with data pipeline and workflow management tools.
- Experience with object-oriented/functional scripting languages: Python, Java, Go, etc.
- Strong problem-solving and analytical skills.
- Excellent verbal communication skills.
- Good interpersonal skills.
- Ability to provide technical leadership for the team.
Posted 3 weeks ago
6.0 - 8.0 years
40 - 45 Lacs
Pune
Work from Office
Job Title: Data Platform Engineer - Tech Lead
Location: Pune, India

Role Description
DB Technology is a global team of tech specialists, spread across multiple trading hubs and tech centers. We have a strong focus on promoting technical excellence: our engineers work at the forefront of financial services innovation using cutting-edge technologies. The DB Pune location plays a prominent role in our global network of tech centers; it is well recognized for its engineering culture and strong drive to innovate. We are committed to building a diverse workforce and to creating excellent opportunities for talented engineers and technologists. Our tech teams and business units use agile ways of working to create the best solutions for the financial markets.

CB Data Services and Data Platform
We are seeking an experienced Software Engineer with strong leadership skills to join our dynamic tech team. In this role, you will lead a group of engineers working on cutting-edge technologies in Hadoop, Big Data, GCP, Terraform, BigQuery, Dataproc, and data management. You will be responsible for overseeing the development of robust data pipelines, ensuring data quality, and implementing efficient data management solutions. Your leadership will be critical in driving innovation, ensuring high standards in data infrastructure, and mentoring team members. Your responsibilities will include working closely with data engineers, analysts, cross-functional teams, and other stakeholders to ensure that our data platform meets the needs of our organization and supports our data-driven initiatives. Join us in building and scaling our tech solutions, including a hybrid data platform, to unlock new insights and drive business growth. If you are passionate about data engineering, we want to hear from you!

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for those 35 yrs. and above

Your key responsibilities
Technical Leadership:
- Lead a cross-functional team of engineers in the design, development, and implementation of on-prem and cloud-based data solutions.
- Provide hands-on technical guidance and mentorship to team members, fostering a culture of continuous learning and improvement.
- Collaborate with product management and stakeholders to define technical requirements and establish delivery priorities.

Architectural and Design Capabilities:
- Architect and implement scalable, efficient, and reliable data management solutions to support complex data workflows and analytics.
- Evaluate and recommend tools, technologies, and best practices to enhance the data platform.
- Drive the adoption of microservices, containerization, and serverless architectures within the team.

Quality Assurance:
- Establish and enforce best practices in coding, testing, and deployment to maintain high-quality code standards.
- Oversee code reviews and provide constructive feedback to promote code quality and team growth.

Your skills and experience
Technical Skills:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in software engineering, with a focus on Big Data and GCP technologies such as Hadoop, PySpark, Terraform, BigQuery, Dataproc, and data management.
- Proven experience in leading software engineering teams, with a focus on mentorship, guidance, and team growth.
- Strong expertise in designing and implementing data pipelines, including ETL processes and real-time data processing.
- Hands-on experience with Hadoop ecosystem tools such as HDFS, MapReduce, Hive, Pig, and Spark.
- Hands-on experience with cloud platforms, particularly Google Cloud Platform (GCP) and its data management services (e.g., Terraform, BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Storage).
- Solid understanding of data quality management and best practices for ensuring data integrity.
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes is a plus.
- Excellent problem-solving skills and the ability to troubleshoot complex systems.
- Strong communication skills and the ability to collaborate with both technical and non-technical stakeholders.

Leadership Abilities:
- Proven experience in leading technical teams, with a track record of delivering complex projects on time and within scope.
- Ability to inspire and motivate team members, promoting a collaborative and innovative work environment.
- Strong problem-solving skills and the ability to make data-driven decisions under pressure.
- Excellent communication and collaboration skills.
- Proactive mindset, attention to detail, and a constant desire to improve and innovate.

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 3 weeks ago
6.0 - 8.0 years
10 - 14 Lacs
Pune
Work from Office
Job Title: Strategic Data Archive Onboarding Engineer, AS
Location: Pune, India

Role Description
Strategic Data Archive is an internal service which enables applications to implement records management for regulatory requirements, application decommissioning, and application optimization. You will work closely with other teams, providing hands-on onboarding support by helping them define record content and metadata, configuring archiving, supporting testing, and creating defensible documentation that archiving was complete. You will need to both support and manage the expectations of demanding internal clients.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for those 35 yrs. and above

Your key responsibilities
- Provide responsive customer service, helping internal clients understand and efficiently manage their records management risks
- Explain our archiving services (both the business value and the technical implementation) and respond promptly to inquiries
- Support the documentation and approval of requirements, including record content and metadata
- Identify and facilitate implementing an efficient solution to meet the requirements
- Manage expectations and provide regular updates, frequently to senior stakeholders
- Configure archiving in test environments; you will not be coding new functionality, but will be making configuration changes maintained in a code repository and deployed with standard tools
- Support testing, ensuring clients have appropriately managed implementation risks
- Help with issue resolution, including data issues, environment challenges, and code bugs
- Promote configurations from test environments to production
- Work with Production Support to ensure archiving is completed and evidenced
- Contribute towards a culture of learning and continuous improvement
- Partner with teams in multiple locations

Your skills and experience
- Delivers against tight deadlines in a fast-paced environment
- Manages others' expectations and meets commitments
- High degree of accuracy and attention to detail
- Ability to communicate (written and verbal) concisely, covering both business concepts and technical details, and to influence partners including senior managers
- High analytical capability and the ability to quickly grasp new contexts; we support multiple areas of the Bank
- Expresses opinions while supporting group decisions
- Ensures deliverables are clearly documented and holds self and others accountable for meeting those deliverables
- Ability to identify risks at an early stage and implement mitigating strategies
- Flexibility and willingness to work autonomously and collaboratively
- Ability to work in virtual teams, agile environments, and matrixed organizations
- Treats everyone with respect and embraces diversity
- Bachelor's degree from an accredited college or university desirable
- Minimum 4 years' experience implementing IT solutions in a global financial institution
- Comfortable with technology (e.g., SQL, FTP, XML, JSON) and a desire and ability to learn new skills as required (e.g., Fabric, Kubernetes, Kafka, Avro, Ansible)
- Must be an expert in SQL and have Python programming experience
- Financial markets and Google Cloud Platform knowledge a plus; curiosity a requirement

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
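Since the role leans on formats like XML and JSON for record metadata, here is a small Python sketch of validating a hypothetical archive record before onboarding; the field names are illustrative, not the actual service schema:

```python
import json

REQUIRED_FIELDS = {"record_id", "business_date", "retention_years", "content"}

def validate_record(raw: str) -> dict:
    """Parse a JSON archive record and check the metadata needed for archiving."""
    record = json.loads(raw)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"record is missing required metadata: {sorted(missing)}")
    if int(record["retention_years"]) <= 0:
        raise ValueError("retention_years must be positive")
    return record

sample = '{"record_id": "R-1001", "business_date": "2024-01-01", "retention_years": 10, "content": "..."}'
print(validate_record(sample)["record_id"])
```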
Posted 3 weeks ago
1.0 - 5.0 years
3 - 7 Lacs
Nagpur
Work from Office
Key Skills: We are looking for a Flutter App Developer who possesses a passion for pushing mobile technologies to the limits. This Flutter app developer will work with our team of talented engineers to design and build the next generation of our mobile applications.

Job Description:
- Experience using web services, REST APIs, and data parsing using XML, JSON, etc.
- Collaborate with cross-functional teams to define, design, and ship new features
- Unit-test code for robustness, including edge cases, usability, and general reliability
- Work on bug fixing and improving application performance
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency
- Can work individually

Required Experience, Skills and Qualifications
Posted 3 weeks ago
5.0 - 6.0 years
10 - 15 Lacs
Chennai, Bengaluru
Work from Office
AI/ML engineering with AWS-based solutions: Amazon SageMaker, Python and ML libraries, data engineering on AWS, AI/ML algorithms, and model deployment strategies. Delivery tooling: CI/CD, CloudFormation, Terraform. AWS Certified Machine Learning certification. Exposure to generative AI, real-time inference, and edge deployment.
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Senior Java Cloud Developer

Role:
- Experience as a Java Developer with expertise in the Apache Beam framework (highest priority)
- Strong knowledge of data engineering concepts
- Familiarity with databases and cloud architecture
- Experience in CI/CD pipeline management

Additional Information:
- Job Type: Full Time
- Work Profile: Hybrid
- Years of Experience: 10+ years
- Location: Bangalore

What We Offer:
- Competitive salaries and comprehensive health benefits
- Flexible work hours and remote work options
- Professional development and training opportunities
- A supportive and inclusive work environment
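The role prioritizes Apache Beam in Java; as a sketch of Beam's pipeline model, here is a minimal word count using the Python SDK, where the same PTransform structure carries over to the Java API (the inline input values are illustrative):

```python
import apache_beam as beam

# A tiny word-count-style pipeline run with the default local runner.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["alpha beta", "beta gamma", "alpha"])
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerWord" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```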
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
At BCE Global Tech, immerse yourself in exciting projects that are shaping the future of both consumer and enterprise telecommunications. This involves building innovative mobile apps to enhance user experiences and enable seamless connectivity on-the-go. Thrive in diverse roles like Full Stack Developer, Backend Developer, UI/UX Designer, DevOps Engineer, Cloud Engineer, Data Science Engineer, and Scrum Master, at a workplace that encourages you to freely share your bold and different ideas. If you are passionate about technology and eager to make a difference, we want to hear from you! Apply now to join our dynamic team in Bengaluru.

ETL DataStage Specialist
Join our dynamic team as an ETL DataStage Specialist. In this role, you'll design, develop, and maintain robust ETL processes to ensure seamless data integration and transformation. Your expertise will drive data quality, performance optimization, and innovation within our data infrastructure. Be a key player in delivering accurate, timely, and valuable insights to support informed business decisions.

- 5+ years of experience as an ETL developer using ETL tools
- 5+ years of experience working with relational databases
- 3+ years of exposure to BI tools (Power BI, MicroStrategy)
- 3+ years of experience working with high-volume data ingestion
- Exposure to fourth-generation programming languages such as Python
- Capable of working as an individual contributor and as part of an agile team
- Motivated individual to drive ETL best practices

Required Skills:
- ETL tools, IBM DataStage an asset: 4+ yrs
- Good knowledge of relational databases: 4+ yrs
- Knowledge of public cloud: 2+ yrs
- Knowledge of BI tools: 3+ yrs
- Knowledge of 4GL programming languages: 1+ yrs

Education Background: Computer Science or Engineering degree/diploma or equivalent experience
Working Timing: 8:30 AM - 5:00 PM EST

What We Offer:
- Competitive salaries and comprehensive health benefits
- Flexible work hours and remote work options
- Professional development and training opportunities
- A supportive and inclusive work environment
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Good understanding of data engineering concepts
- Working experience with the Java language
- Working experience with the Apache Beam framework (using Java)
- Working experience with Azure (ADF) or Google Cloud Platform (GCP); GCP preferred:
  - GCP Dataflow (using Java)
  - GCP BigQuery
  - GCP Workflows
  - GCP Cloud Run (using Java)
  - GCP Cloud Storage
  - GCP Cloud Functions
- Infrastructure as Code (IaC): Terraform

Required Qualifications To Be Successful In This Role:
- Working experience with GitLab and GitLab CI/CD
- Working experience with the Maven build tool
- Working experience with Docker containers
- Working experience with Oracle Database
- Good understanding of shell scripting
- Good understanding of Redis
- Good communication skills
- Good understanding of Agile methodologies

Additional Information:
- Job Type: Full Time
- Work Profile: Hybrid (Work from Office)
- Years of Experience: 6-10 years
- Location: Bangalore

Benefits - What We Offer:
- Competitive salaries and comprehensive health benefits
- Flexible work hours and remote work options
- Professional development and training opportunities
- A supportive and inclusive work environment
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
About Us: BCE Global Tech is a dynamic and innovative company dedicated to pushing the boundaries of technology. We are on a mission to modernize global connectivity, one connection at a time. Our goal is to build the highway to the future of communications, media, and entertainment, emerging as a powerhouse within the technology landscape in India. We bring ambitions to life through design thinking that bridges the gaps between people, devices, and beyond, fostering unprecedented customer satisfaction through technology. At BCE Global Tech, we are guided by our core values of innovation, customer-centricity, and a commitment to progress. We harness cutting-edge technology to provide business outcomes with positive societal impact. Our team of thought-leaders is pioneering advancements in 5G, MEC, IoT, and cloud-native architecture. We offer continuous learning opportunities, innovative projects, and a collaborative work environment that empowers our employees to grow and succeed.

Responsibilities:
- Lead the migration of data pipelines from Hadoop to Google Cloud Platform (GCP)
- Design, develop, and maintain data workflows using Airflow and custom flow solutions
- Implement infrastructure as code using Terraform
- Develop and optimize data processing applications using Java Spark or Python Spark
- Utilize Cloud Run and Cloud Functions for serverless computing
- Manage containerized applications using Docker
- Understand and enhance existing Hadoop pipelines
- Write and execute unit tests to ensure code quality
- Deploy data engineering solutions in production environments
- Craft and optimize SQL queries for data manipulation and analysis

Requirements:
- 7-8 years of experience in data engineering or related fields
- Proven experience with GCP migration from Hadoop pipelines
- Proficiency in Airflow and custom flow solutions
- Strong knowledge of Terraform for infrastructure management
- Expertise in Java Spark or Python Spark
- Experience with Cloud Run and Cloud Functions
- Experience with Dataflow, Dataproc, and cloud monitoring tools in GCP
- Familiarity with Docker for container management
- Solid understanding of Hadoop pipelines
- Ability to write and execute unit tests
- Experience with deployments in production environments
- Strong SQL query skills

Skills:
- Excellent teamwork and collaboration abilities
- Quick learner with a proactive attitude
- Strong problem-solving skills and attention to detail
- Ability to work independently and as part of a team
- Effective communication skills

Why Join Us:
- Opportunity to work with cutting-edge technologies
- Collaborative and supportive work environment
- Competitive salary and benefits
- Career growth and development opportunities
Posted 3 weeks ago
1.0 - 4.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Develop/enhance data warehousing functionality, including the use and management of the Snowflake data warehouse and the surrounding entitlements, pipelines, and monitoring, in partnership with Data Analysts and Architects and with guidance from the lead Data Engineer.

About the Role
In this opportunity as Data Engineer, you will:
- Develop/enhance data warehousing functionality, including the use and management of the Snowflake data warehouse and the surrounding entitlements, pipelines, and monitoring, in partnership with Data Analysts and Architects and with guidance from the lead Data Engineer
- Innovate with new approaches to meeting data management requirements
- Effectively communicate and liaise with other data management teams embedded across the organization and with data consumers in data science and business analytics teams
- Analyze existing data pipelines and assist in enhancing and re-engineering the pipelines as per business requirements
- Bachelor's degree or equivalent required; Computer Science or related technical degree preferred

About You
You're a fit for the role if your background includes:
- Mandatory skills: Data Warehousing, data models, data processing (good to have), SQL, Power BI / Tableau, Snowflake (good to have), Python
- 3.5+ years of relevant experience in implementation of data warehouses and data management technologies for large-scale organizations
- Experience in building and maintaining optimized and highly available data pipelines that facilitate deeper analysis and reporting
- Experience analyzing data pipelines
- Knowledge of Data Warehousing, including data models and data processing
- Broad understanding of the technologies used to build and operate data and analytic systems
- Excellent critical thinking, communication, presentation, documentation, troubleshooting, and collaborative problem-solving skills
- Beginner to intermediate knowledge of AWS, Snowflake, and Python
- Hands-on experience with programming and scripting languages
- Knowledge of and hands-on experience with Data Vault 2.0 is a plus

You should also have experience in and comfort with some of the following skills/concepts:
- Good SQL writing and performance tuning
- Data integration tools like DBT, Informatica, etc.
- Intermediate skills in a programming language like Python/PySpark/Java/JavaScript
- AWS services and management, including serverless, container, queueing, and monitoring services
- Consuming and building APIs

#LI-SM1

What's in it For You
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
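Reflecting the Snowflake-plus-Python pairing in this role, here is a minimal sketch of querying Snowflake with the official Python connector; the account, credentials, and table are hypothetical placeholders:

```python
import snowflake.connector

# Hypothetical connection parameters; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Illustrative aggregate over a hypothetical table.
    cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```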
Posted 3 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Role Description: Let's do this. Let's change the world. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks, with deep domain knowledge of Manufacturing and/or Process Development and/or Supply Chain in biotech, life sciences, or pharma. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
- Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Be expert in data quality, data validation, and verification frameworks
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
- Deep domain knowledge of Manufacturing and/or Process Development and/or Supply Chain in biotech, life sciences, or pharma
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning on big data processing
- Strong understanding of AWS services
- Ability to quickly learn, adapt, and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly; organized and detail-oriented
- Strong presentation and public speaking skills
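Given the emphasis on data quality and validation frameworks here, a small PySpark sketch of reusable pipeline checks; the table path, key column, and rules are illustrative:

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("quality_checks").getOrCreate()

def run_quality_checks(df: DataFrame, key_col: str, required_cols: list) -> dict:
    """Return simple quality metrics: row count, null counts, duplicate keys."""
    metrics = {"row_count": df.count()}
    for col in required_cols:
        metrics[f"nulls_{col}"] = df.filter(F.col(col).isNull()).count()
    metrics["duplicate_keys"] = (
        df.groupBy(key_col).count().filter("count > 1").count()
    )
    return metrics

# Hypothetical curated dataset.
batches = spark.read.parquet("s3://lake/curated/manufacturing_batches/")
report = run_quality_checks(batches, key_col="batch_id",
                            required_cols=["batch_id", "site", "start_ts"])
assert report["duplicate_keys"] == 0, f"quality gate failed: {report}"
print(report)
```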
Posted 3 weeks ago
3.0 - 8.0 years
7 - 11 Lacs
Chennai
Work from Office
The Corporate Data team at Zuora is responsible for building a "data mindset" culture within the company. We aim to maximize the value of our data to improve the customer experience and boost company growth and employee productivity. Our team includes data engineers, data scientists, and machine learning engineers working closely to design and build complete solutions.

As a Machine Learning Engineer on the team, you'll have the opportunity to:
- Design and implement ML and AI-powered products that leverage the wealth of mission-critical subscription, billing, and payment data our customers manage on the Zuora platform.
- Evaluate and improve model accuracy, scalability, and performance using appropriate metrics at industry scale.
- Research and experiment with cutting-edge ML techniques and optimize model performance to enhance existing and new "smart" products.
- Deepen your knowledge of the customer lifecycle at a B2B company and contribute directly to maximizing revenue and customer retention.
- Collaborate with product teams, data engineers, and customer success teams to create end-to-end solutions that scale in production.

What you'll do:
- Improvement of current systems: gather metrics on the performance and effectiveness of the current product; continuously fine-tune and retrain models to ensure the systems remain efficient.
- New product feature development: proactively identify opportunities to enhance our product offerings with machine learning and AI-driven features; lead the design and development of innovative solutions that align with market demands and customer needs.
- Best practices and governance: establish and document best practices for training, evaluating, and operating machine learning and AI models; promote a culture of responsible AI and ensure compliance with data privacy regulations.

Your experience and skills:
- Bachelor's or Master's in Computer Science, Statistics, Machine Learning, or a related field.
- 3+ years of relevant technical experience as a Data Scientist, Machine Learning Engineer, or Software Engineer.
- Exposure to relevant business domains, including sales, marketing, subscription management, accounts receivable, payment processing, and product analytics.
- Proficiency in Python and common machine learning frameworks such as scikit-learn.
- Solid understanding of the software development life cycle and software engineering principles.
- Experience with one or more subfields, such as sentiment analysis or predictive modeling.
- Understanding of the machine learning pipeline for deployment and production.
- Excellent communication and teamwork skills to collaborate with cross-functional teams.
- A passion for staying updated with the latest advancements in data science and AI.

As part of our commitment to building an inclusive, high-performance culture where ZEOs feel inspired, connected, and valued, we support ZEOs with:
- Competitive compensation, variable bonus and performance reward opportunities, and retirement programs
- Medical insurance
- Generous, flexible time off
- Paid holidays, "wellness" days, and company-wide end-of-year break
- 6 months of fully paid parental leave
- Learning & Development stipend
- Opportunities to volunteer and give back, including charitable donation match
- Free resources and support for your mental wellbeing
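Since the posting names scikit-learn and sentiment analysis, here is a minimal text-classification sketch using a TF-IDF plus logistic regression pipeline; the tiny inline dataset is illustrative only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real model would train on labeled production data.
texts = [
    "great product, renewed instantly",
    "billing was painful",
    "love the new dashboard",
    "support never replied",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["the renewal flow was great"]))  # expected: [1]
print(model.predict_proba(["support was painful"]))   # class probabilities
```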
Posted 3 weeks ago
6.0 - 7.0 years
15 - 22 Lacs
Pune, Bengaluru
Work from Office
Role: Data Engineer
Experience: 6-7 years
Location: Bangalore, Pune
Notice Period: Immediate joiners only

Role & responsibilities:
Mandatory skills: strong PySpark (programming), Databricks

Background and experience:
- Bachelor's degree in Computer Science or a related field
- 6-7 years of experience in cloud data engineering

Technical and professional skills:
We are looking for a flexible, fast-learning, technically strong Data Engineer. Expertise is required in the following fields:
- Proficiency in cloud services: Azure
- Building ETL and data movement solutions
- Migrating data from traditional database systems to a cloud environment
- Strong hands-on experience working with streaming datasets
- Building complex notebooks in Databricks to achieve business transformations
- Hands-on experience in data refinement using PySpark and Spark SQL
- Familiarity with building datasets using Scala
- Familiarity with tools such as Jira and GitHub
- Experience leading agile scrum, sprint planning, and review sessions
- Good communication and interpersonal skills

Reach us: If you are interested in this position and meet the above qualifications, please reach out to me directly at swati@cielhr.com and share your updated resume highlighting your relevant experience.
Posted 3 weeks ago