
267 Data Pipelines Jobs - Page 9

Set up a job alert
JobPe aggregates listings so they are easy to find, but you apply directly on the original job portal.

6.0 - 8.0 years

6 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

6+ years of total IT experience in development projects, including 4+ years in cloud-based solutions and 4+ years of solid hands-on Snowflake development. Hands-on experience designing and building data pipelines on cloud infrastructure, with extensive work on AWS and Snowflake, including end-to-end builds covering ingestion, transformation, and extract generation in Snowflake. Strong hands-on experience writing complex SQL queries. Good understanding of and experience with Azure cloud services. Optimize and tune Snowflake performance, including query optimization, with experience in scaling strategies. Address data issues, root cause analysis, and production support. Experience working in the financial industry. Understanding of Agile methodologies. Certifications in Snowflake and Azure are an added advantage.
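
As a flavor of the hands-on Snowflake work described, a minimal sketch using the snowflake-connector-python package; the account, schema, and table names are hypothetical placeholders, not details from this posting:

```python
# Minimal sketch: run a transformation step inside Snowflake and verify it.
# Account, database, and table names are illustrative only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",     # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="CURATED",
)

TRANSFORM_SQL = """
INSERT INTO curated.daily_positions
SELECT account_id,
       trade_date,
       SUM(quantity * price) AS market_value
FROM raw.trades
WHERE trade_date = CURRENT_DATE
GROUP BY account_id, trade_date
"""

cur = conn.cursor()
try:
    cur.execute(TRANSFORM_SQL)  # transformation inside Snowflake
    cur.execute("SELECT COUNT(*) FROM curated.daily_positions")
    print("rows loaded:", cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```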

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - Lead AWS Cloud Engineer! In this role, you will own the vision, architecture, and governance of cloud infrastructure supporting scalable, secure, and high-performing AI/GenAI platforms across the enterprise. Your mandate includes building resilient, compliant, and cost-efficient cloud ecosystems, primarily on AWS but with a strong foundation for multi-cloud operability.

Responsibilities: Define and maintain cloud infrastructure architecture across AWS accounts, environments, and regions. Architect multi-tenant, secure VPC and networking models supporting cross-account and hybrid integrations. Standardize the Infrastructure-as-Code (Terraform) strategy for AI/ML/GenAI workloads across teams. Govern security frameworks, including encryption, IAM boundary enforcement, secrets management, and logging. Oversee cloud automation in CI/CD pipelines and support deployment of GenAI workloads (LLM APIs, vector DBs). Design, review, and implement disaster recovery, backup, and high-availability strategies. Optimize cloud cost and performance with tagging, resource planning, and usage analytics. Define and support multi-cloud readiness, including network peering, SSO/SAML, and logging across clouds. Collaborate with MLOps, Compliance, InfoSec, and Architecture teams to align infrastructure with enterprise goals. Engage in the design, development, and maintenance of data pipelines for various AI use cases. Actively contribute to key deliverables as part of an agile development team. Collaborate with others to source, analyse, test, and deploy data processes.

Qualifications we seek in you! Minimum Qualifications: Hands-on AWS infrastructure experience in production environments. Experience developing, testing, and deploying data pipelines. Clear and effective communication skills to interact with team members, stakeholders, and end users. Degree/qualification in Computer Science or a related field, or equivalent work experience. Knowledge of governance and compliance policies, standards, and procedures. Proven ability to manage enterprise-wide IaC, AWS CLI, and Python or Bash scripting, along with versioning strategy. Expert in IAM, S3, DevOps, VPC, ECS/EKS, Lambda, and serverless computing. Experience supporting AI/ML or GenAI pipelines in AWS (especially for compute and networking). Hands-on experience with multi-cloud architecture basics (e.g., SSO, networking, blob exchange, shared VPC setups). Deep understanding of CI/CD automation, AI workload optimization, and infrastructure governance. Hands-on experience designing or managing infrastructure in at least one other cloud (Azure or GCP). Hands-on experience with multiple AI/ML/RAG/LLM workloads and model deployment infrastructure. AWS Certified Solutions Architect - Professional or Advanced Networking Specialty. Preferred Qualifications/Skills: Experience deploying infrastructure in both AWS and another major cloud provider (Azure or GCP). Designed or migrated enterprise workloads to multi-cloud or hybrid setups. Experience with cross-cloud monitoring, networking (VPNs, Transit Gateways), and DR policies. Familiarity with multi-cloud tools (e.g., HashiCorp Vault, Kubernetes with cross-cloud clusters). Strong understanding of DevSecOps best practices and compliance requirements. In-depth exposure to regulated industries (BFSI, healthcare) requiring auditability and compliance.

Why join Genpact? Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
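
To make the governance duties above concrete, here is a hedged boto3 sketch of an S3 posture sweep (default encryption and cost tags); the `cost-center` tag key is an assumption for illustration, not a Genpact standard:

```python
# Hypothetical governance sweep: flag S3 buckets that lack default
# encryption or a cost-allocation tag. Tag key is an assumed convention.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        encrypted = True
    except ClientError as err:
        # Raised when no default server-side encryption is configured
        encrypted = err.response["Error"]["Code"] != (
            "ServerSideEncryptionConfigurationNotFoundError"
        )
    try:
        tags = {t["Key"] for t in s3.get_bucket_tagging(Bucket=name)["TagSet"]}
    except ClientError:
        tags = set()  # NoSuchTagSet: bucket has no tags at all
    if not encrypted or "cost-center" not in tags:
        print(f"review: {name} encrypted={encrypted} tags={sorted(tags)}")
```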

Posted 1 month ago

Apply

0.0 - 5.0 years

0 - 5 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are seeking a skilled Big Data Engineer to design, build, and maintain scalable big data processing pipelines. The ideal candidate will be proficient in Python and PySpark, have hands-on experience with cloud platforms, and be familiar with CI/CD processes to automate deployments and workflows. Responsibilities: Develop and maintain data pipelines using Python and PySpark. Design and implement scalable big data solutions on cloud platforms (AWS, Azure, or GCP). Build and manage CI/CD pipelines for automated deployment and testing. Collaborate with data scientists, analysts, and other engineers to deliver robust data infrastructure. Optimize data processing workflows for performance and reliability. Key Skills: Python; PySpark; cloud platforms - AWS / Azure / GCP (any one or more); CI/CD pipelines and tools.
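
A minimal sketch of the kind of PySpark pipeline this role describes; the paths and column names are invented for illustration:

```python
# Read raw JSON events, aggregate them by day, and write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

events = spark.read.json("s3a://raw-bucket/events/")  # hypothetical source
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://curated-bucket/daily_events/"  # hypothetical sink
)
spark.stop()
```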

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 7 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are seeking an experienced AWS Data Engineer to support the on-ground client team in delivering high-quality data engineering solutions. The ideal candidate will have hands-on experience with AWS data services and strong programming skills. Key Responsibilities: Collaborate with the client's onsite team to ensure timely completion of deliverables Design, develop, and maintain data pipelines using AWS services such as Lambda, Glue, and Redshift Write efficient, reusable, and optimized code in Python and SQL for data processing and transformation Monitor, troubleshoot, and optimize data workflows for performance and reliability Implement best practices in data engineering and cloud infrastructure management Participate in code reviews, testing, and documentation activities Required Skills: Proven experience with AWS Data Engineering services: Lambda, Glue, Redshift Strong programming skills in Python and SQL Experience working in collaboration with onsite client teams to meet project goals Good problem-solving and communication skills Knowledge of data pipeline design, ETL/ELT processes, and cloud data management
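
One common shape for the Lambda-plus-Glue pattern named above, sketched with boto3; the Glue job name and argument keys are assumptions, not details from this posting:

```python
# Hypothetical Lambda handler: when a file lands in S3, start a Glue ETL job
# against that object. Job name and argument keys are illustrative.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]  # S3 put-event payload
    run = glue.start_job_run(
        JobName="daily-ingest-job",  # assumed job name
        Arguments={
            "--input_path": (
                f"s3://{record['bucket']['name']}/{record['object']['key']}"
            )
        },
    )
    return {"JobRunId": run["JobRunId"]}
```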

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Job Opening: Senior Data Engineer (Remote, 6-Month Contract) Remote | Contract Duration: 6 Months | Experience: 6-8 Years We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions. #KeyResponsibilities Build scalable ETL pipelines and implement robust data solutions in Azure. Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults. Design and maintain secure and efficient data lake architecture. Work with stakeholders to gather data requirements and translate them into technical specs. Implement CI/CD pipelines for seamless data deployment using Azure DevOps. Monitor data quality, performance bottlenecks, and scalability issues. Write clean, organized, reusable PySpark code in an Agile environment. Document pipelines, architectures, and best practices for reuse. #MustHaveSkills Experience: 6+ years in Data Engineering Tech Stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults Core Expertise: Data Warehousing, ETL, Data Pipelines, Data Modelling, Data Governance Agile, SDLC, Containerization (Docker), Clean coding practices #GoodToHaveSkills Event Hubs, Logic Apps Power BI Strong logic building and competitive programming background Mode: Remote Duration: 6 Months Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
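
As an illustration of the ADLS Gen2 / Key Vault pattern in the stack above, a sketch as it might appear in a Databricks notebook (where `spark` and `dbutils` are provided by the platform); the secret scope, key, and storage-account names are placeholders:

```python
# Databricks notebook sketch: read a bronze Delta table from ADLS Gen2 using
# a Key Vault-backed secret scope, clean it, and append to a silver table.
from pyspark.sql import functions as F

storage_key = dbutils.secrets.get(scope="kv-scope", key="adls-key")
spark.conf.set(
    "fs.azure.account.key.mystorageacct.dfs.core.windows.net", storage_key
)

df = spark.read.format("delta").load(
    "abfss://bronze@mystorageacct.dfs.core.windows.net/sales"
)
clean = df.dropDuplicates(["order_id"]).withColumn(
    "loaded_at", F.current_timestamp()
)
clean.write.format("delta").mode("append").save(
    "abfss://silver@mystorageacct.dfs.core.windows.net/sales"
)
```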

Posted 1 month ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Key Responsibilities: Develop, design, and build scalable and modular software components for quantitative analysis, financial modeling, investment analytics for risk models, and high-volume, complex data processing pipelines. Ensure quality and performance of the research data pipeline and curate new premium content using different approaches. Conduct research on additional content/data which can add value to models and/or investment processes. Perform and implement complex quantitative calculations with high accuracy and performance. Collaborate with modelers and content experts to develop content expertise and implement optimized, performant solutions. Apply statistical methods to real-world financial and climate data to derive business insights. Optimize algorithms for time-series data analysis and financial computations. Skills & Qualifications: Bachelor's or Master's level education in Computer Science, Engineering, or a related discipline. Minimum 3+ years of experience in Python-based, full-scale production software development and design. Formidable analytical, problem-solving, and production troubleshooting skills. Understanding of climate/ESG vendors, climate datasets, and standards. A passion for providing fundamental software solutions for highly available, performant full-stack applications with a "student of technology" attitude. Passion to work in a team environment, multitask, and communicate effectively. Knowledge of software development methodologies (analysis, design, development, testing) and a basic understanding of Agile / Scrum methodology and practices. Ability and willingness to learn fast, multi-task, self-motivate, and pick up new things easily. Ability to work independently and efficiently in a fast-paced and team-oriented environment. Good to Have: Understanding of Agile work environments, including knowledge of GIT and CI/CD. Knowledge of investment processes and climate risk, particularly transition risk and decarbonization analytics. Exposure to curating unstructured data using NLP / Gen AI / LLMs. CFA/FRM preferred.
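
For flavor, the kind of time-series computation this role involves - log returns and annualized rolling volatility with pandas, on invented prices:

```python
# Illustrative only: log returns and rolling volatility on a toy price series.
import numpy as np
import pandas as pd

prices = pd.Series(
    [100.0, 101.2, 100.7, 102.3, 103.1, 102.8],
    index=pd.date_range("2024-01-01", periods=6, freq="B"),
    name="close",
)
log_returns = np.log(prices / prices.shift(1)).dropna()
rolling_vol = log_returns.rolling(window=3).std() * np.sqrt(252)  # annualized
print(rolling_vol.round(4))
```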

Posted 1 month ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Roles & Responsibilities: Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data management tools. Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation. Break down features into work that aligns with the architectural direction runway. Participate hands-on in pilots and proofs-of-concept for new patterns. Create robust documentation from data analysis and profiling, proposed designs, and data logic. Develop advanced SQL queries to profile and unify data. Develop data processing code in SQL, along with semantic views to prepare data for reporting. Develop Power BI models and reporting packages. Design robust data models and processing layers that support both analytical processing and operational reporting needs. Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments. Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms. Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability. Collaborate with partners to define data requirements, functional specifications, and project goals. Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications. Basic Qualifications: Master's degree with 1 to 3 years of experience in Data Engineering OR Bachelor's degree with 1 to 3 years of experience in Data Engineering. Must-Have Skills: Minimum of 1 year of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization. Minimum of 1 year of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management. Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads. Experience using cloud platforms (AWS), data lakes, and data warehouses. Working knowledge of ETL processes, data pipelines, and integration technologies. Good communication and collaboration skills to work with cross-functional teams and senior leadership. Ability to assess business needs and design solutions that align with organizational goals. Exceptional hands-on capabilities with data profiling and data analysis. Good-to-Have Skills: Experience with human data, ideally human healthcare data. Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management. Professional Certifications: ITIL Foundation or other relevant certifications (preferred). SAFe Agile Practitioner (6.0). Microsoft Certified: Data Analyst Associate (Power BI) or related certification. Databricks Certified Professional or similar certification.
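
To illustrate the profiling and unification work described, a minimal PySpark pass computing per-column null and distinct counts; the table name is a placeholder:

```python
# Small data-profiling sketch: null and distinct counts for every column.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling").getOrCreate()
df = spark.table("curated.patients")  # hypothetical unified table

profile = df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}__nulls")
     for c in df.columns]
    + [F.countDistinct(F.col(c)).alias(f"{c}__distinct")
       for c in df.columns]
)
profile.show(truncate=False)
```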

Posted 1 month ago

Apply

1.0 - 3.0 years

1 - 3 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Roles & Responsibilities: Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data management tools. Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation. Break down features into work that aligns with the architectural direction runway. Participate hands-on in pilots and proofs-of-concept for new patterns. Create robust documentation from data analysis and profiling, proposed designs, and data logic. Develop advanced SQL queries to profile and unify data. Develop data processing code in SQL, along with semantic views to prepare data for reporting. Develop Power BI models and reporting packages. Design robust data models and processing layers that support both analytical processing and operational reporting needs. Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments. Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms. Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability. Collaborate with partners to define data requirements, functional specifications, and project goals. Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications. Basic Qualifications: Master's degree with 1 to 3 years of experience in Data Engineering OR Bachelor's degree with 1 to 3 years of experience in Data Engineering. Must-Have Skills: Minimum of 1 year of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization. Minimum of 1 year of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management. Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads. Experience using cloud platforms (AWS), data lakes, and data warehouses. Working knowledge of ETL processes, data pipelines, and integration technologies. Good communication and collaboration skills to work with cross-functional teams and senior leadership. Ability to assess business needs and design solutions that align with organizational goals. Exceptional hands-on capabilities with data profiling and data analysis.
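
The CDC experience asked for above might look like this minimal Delta Lake MERGE issued through Spark SQL in a Databricks notebook (where `spark` is the active session); the table names and the `op` change flag are assumptions:

```python
# Sketch of a change-data-capture upsert with Delta Lake MERGE.
# Deletes flagged rows, updates matches, and inserts new records.
merge_sql = """
MERGE INTO silver.customers AS tgt
USING staging.customer_changes AS src
ON tgt.customer_id = src.customer_id
WHEN MATCHED AND src.op = 'D' THEN DELETE
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *
"""
spark.sql(merge_sql)  # `spark` is provided in a Databricks notebook
```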

Posted 1 month ago

Apply

1.0 - 3.0 years

1 - 3 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

In this vital role you will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role requires deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets. Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems. Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments. Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms. Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring. Be an expert in data quality, data validation, and verification frameworks. Innovate, explore, and implement new tools and technologies to enhance efficient data processing. Proactively identify and implement opportunities to automate tasks and develop reusable frameworks. Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value. Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories. Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle. Collaborate and communicate effectively with the product teams and with cross-functional teams to understand business requirements and translate them into technical solutions. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks.
Basic Qualifications: Master's degree and 1 to 3 years of Computer Science, IT, or related field experience OR Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience. Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency in workflow orchestration and performance tuning on big data processing. Strong understanding of AWS services. Ability to quickly learn, adapt, and apply new technologies. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. Preferred Qualifications: AWS Certified Data Engineer preferred. Databricks certification preferred. Scaled Agile SAFe certification preferred. Data engineering experience in the biotechnology or pharma industry. Experience in writing APIs to make data available to consumers. Experience with SQL/NoSQL databases and vector databases for large language models. Experience with data modeling and performance tuning for both OLAP and OLTP databases. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized, and be detail-oriented. Strong presentation and public speaking skills.
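
A hedged sketch of the rigorous quality checks the description emphasizes - row-count, null-key, and duplicate assertions on a PySpark DataFrame before publishing; the key column name is a placeholder:

```python
# Minimal data-quality gate for a pipeline output.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def validate(df: DataFrame, key: str = "record_id") -> None:
    total = df.count()
    assert total > 0, "pipeline produced no rows"

    null_keys = df.filter(F.col(key).isNull()).count()
    assert null_keys == 0, f"{null_keys} rows have a null {key}"

    dupes = total - df.dropDuplicates([key]).count()
    assert dupes == 0, f"{dupes} duplicate {key} values found"
```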

Posted 1 month ago

Apply

1.0 - 3.0 years

1 - 3 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

In this vital role we are seeking a Business Systems Analyst with a strong background in data and analytics to define and manage product requirements for AI-driven applications. Partner with Data Scientists, ML Engineers, and Product Managers to define business processes, product needs, and AI solution requirements. Capture and document epics, user stories, acceptance criteria, and data process flows for AI-powered analytics applications. Work closely with partners to define scope, priorities, and impact of new AI and data initiatives. Ensure non-functional requirements, such as data security, model interpretability, and system performance, are included in product backlogs. Facilitate the breakdown of Epics into Features and sprint-sized User Stories and lead backlog grooming sessions. Ensure alignment of technical requirements and UX for AI-based applications and interactive dashboards. Collaborate with engineers to define data ingestion, transformation, and model deployment processes. Develop and implement product demonstrations showcasing AI-driven insights and analytics. Maintain detailed documentation of data pipelines, model lifecycle management, and system integrations. Stay engaged throughout software development, providing proactive feedback to ensure business needs are met. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. This role bridges the gap between business needs and technical execution, ensuring the development of high-quality, scalable AI solutions. You will collaborate with data scientists, engineers, and product managers to shape product roadmaps, refine requirements, and drive alignment between business objectives and technical capabilities. Basic Qualifications: Master's degree and 1 to 3 years of experience in Computer Science, Data Science, Information Systems, or a related field OR Bachelor's degree and 3 to 5 years of experience in Computer Science, Data Science, Information Systems, or a related field OR Diploma and 7 to 9 years of experience in Computer Science, Data Science, Information Systems, or a related field. Preferred Qualifications: Experience defining requirements for AI/ML models, data pipelines, or analytics dashboards. Familiarity with cloud platforms (AWS, Azure, GCP) for AI and data applications. Understanding of data security, governance, and compliance in AI solutions. Ability to communicate complex AI concepts and technical constraints to non-technical partners. Knowledge of MLOps, model monitoring, and CI/CD for AI applications.

Posted 1 month ago

Apply

8.0 - 12.0 years

8 - 12 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

To lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for the day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs. Roles & Responsibilities: Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems. Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets. Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments. Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements. Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows. Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls. Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis. Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives. Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes. Mentor and develop a high-performing team of data operations analysts and leads. Functional Skills: Must-Have Skills: Experience managing a team of data engineers in biotech/pharma domain companies. Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems. Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions. Experience managing data workflows in cloud environments such as AWS, Azure, or GCP. Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions. Working knowledge of SQL, Python, or scripting languages for process monitoring and automation. Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization. Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX). Excellent leadership, communication, and stakeholder engagement skills. Well versed with full stack development & DataOps automation, logging frameworks, and pipeline orchestration tools. Strong analytical and problem-solving skills to address complex data challenges. Effective communication and interpersonal skills to collaborate with cross-functional teams. 
Good-to-Have Skills: Data engineering management experience in biotech/life sciences/pharma. Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph. Education and Professional Certifications: Doctorate degree with 3-5+ years of experience in Computer Science, IT, or a related field OR Master's degree with 6-8+ years of experience in Computer Science, IT, or a related field OR Bachelor's degree with 10-12+ years of experience in Computer Science, IT, or a related field. AWS Certified Data Engineer preferred. Databricks certification preferred. Scaled Agile SAFe certification preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
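
Given the posting's mention of Python scripting for process monitoring and SLA tracking, a small illustrative check over a hypothetical job-run log using pandas (job names, timestamps, and the 60-minute SLA are invented):

```python
# Flag pipeline runs that breached a latency SLA.
import pandas as pd

SLA_MINUTES = 60
runs = pd.DataFrame({
    "job": ["ingest", "transform", "publish"],
    "started": pd.to_datetime(
        ["2024-05-01 01:00", "2024-05-01 02:10", "2024-05-01 03:45"]),
    "finished": pd.to_datetime(
        ["2024-05-01 01:40", "2024-05-01 03:40", "2024-05-01 04:20"]),
})
runs["minutes"] = (runs["finished"] - runs["started"]).dt.total_seconds() / 60
breaches = runs[runs["minutes"] > SLA_MINUTES]
print(breaches[["job", "minutes"]])  # candidates for root cause analysis
```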

Posted 1 month ago

Apply

6.0 - 11.0 years

11 - 21 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are seeking a skilled Azure Data Engineer with deep expertise in data engineering technologies. The ideal candidate will have a strong foundation in data processing, cloud services, and data pipeline development to support our growing data needs. Key Responsibilities: - Design, implement, and manage scalable data pipelines on Azure, ensuring data integrity and performance. - Utilize Databricks and Snowflake to perform data transformation, processing, and analytics. - Conduct unit testing on developed code to ensure quality and reliability of data solutions. - Collaborate with cross-functional teams in an Agile/Scrum environment to deliver robust data solutions. Qualifications: - Certifications: Certification in Databricks (preferred). Technical Skills: - Strong working knowledge of Snowflake (4/5 proficiency). - Competence in performing unit testing for developed solutions (4/5 proficiency). - Understanding of Agile and Scrum methodologies (3/5 proficiency). Preferred Candidate Profile: The ideal candidate will possess strong analytical skills, attention to detail, and the ability to work effectively in a collaborative team environment. If you meet these qualifications and are eager to join a dynamic team, we encourage you to apply!
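
Because unit testing is weighted heavily here, a minimal pytest example for a PySpark transformation run against a local SparkSession; the function and column names are invented for illustration:

```python
# Unit test for a simple PySpark transformation using a local session.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_total(df):
    """Hypothetical transformation under test: total = qty * price."""
    return df.withColumn("total", F.col("qty") * F.col("price"))

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_total(spark):
    df = spark.createDataFrame([(2, 5.0)], ["qty", "price"])
    result = add_total(df).first()
    assert result["total"] == 10.0
```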

Posted 1 month ago

Apply

14.0 - 16.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Job Category Software Engineering Job Details About Salesforce We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too - driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good - you've come to the right place. Role Description Join the AI team at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organisations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team comprised of Data Scientists, Software Engineers, Machine Learning Engineers, UX experts, and Product Managers to build upon Agentforce, our innovative new AI framework. We value execution, clear communication, feedback, and making learning fun. Your impact - You will: Architect, design, implement, test, and deliver highly scalable AI solutions: Agents, AI Copilots/assistants, Chatbots, AI Planners, RAG solutions. Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.). Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company. Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps. Analyze and provide feedback on product strategy and technical feasibility. Drive long-term design strategies that span multiple sophisticated projects, and deliver technical reports and performance presentations to customers and at industry events. Actively communicate with, encourage, and motivate all levels of staff. Be a domain expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements. Troubleshoot complex production issues and work with support and customers as needed. Required Skills: 14+ years of experience in building highly scalable Software-as-a-Service applications/platforms. Experience building technical architectures that address complex performance issues. Thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity; an innovation/startup mindset to be able to adapt. Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java. Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development. High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.).
Proven understanding of web technologies, such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax. Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL. Experience delivering or partnering with teams that ship AI products at high scale. Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, and Selenium. Demonstrated ability to drive long-term design strategies that span multiple complex projects. Experience delivering technical reports and presentations to customers and at industry events. Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues. Experience with the full software lifecycle in highly agile and ambiguous environments. Excellent interpersonal and communication skills. Preferred Skills: Solid experience in API development, API lifecycle management, and/or client SDK development. Experience with machine learning or cloud technology platforms like AWS SageMaker, Terraform, Spinnaker, EKS, GKE. Experience with AI/ML and data science, including predictive and generative AI. Experience with data engineering, data pipelines, or distributed systems. Experience with continuous integration (CI) and continuous deployment (CD), and service ownership. Familiarity with Salesforce APIs and technologies. Ability to support/resolve production customer escalations with excellent debugging and problem-solving skills. BENEFITS & PERKS Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more! World-class enablement and on-demand training, with exposure to executive thought leaders and regular 1:1 coaching with leadership. Volunteer opportunities and participation in our 1:1:1 model for giving back to the community. Accommodations If you require assistance due to a disability when applying for open positions, please submit a request. Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Any employee or potential employee will be assessed on the basis of merit, competence, and qualifications - without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
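
This is not Salesforce's implementation, but the retrieval step of a RAG solution (one of the architectures named above) can be illustrated in a few lines: rank documents against a query by cosine similarity over embedding vectors, using toy vectors:

```python
# Toy RAG retrieval: pick the most relevant document for an LLM prompt.
import numpy as np

docs = {
    "refund policy": np.array([0.9, 0.1, 0.0, 0.2]),  # made-up embeddings
    "setup guide":   np.array([0.1, 0.8, 0.3, 0.0]),
}
query = np.array([0.85, 0.15, 0.05, 0.1])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # top-ranked context would be passed to the LLM
```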

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

What you will do The R&D Precision Medicine team is responsible for Data Standardization, Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with access to Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These data include clinical data, omics, and images. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and speed to market of advanced precision medications. The Data Engineer will be responsible for full stack development of enterprise analytics and data mastering solutions leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and advanced AI pipelines. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and be exceptionally skilled with data analysis and profiling. You will collaborate closely with partners, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a solid background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering. Roles & Responsibilities: Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data management tools. Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation. Break down features into work that aligns with the architectural direction runway. Participate hands-on in pilots and proofs-of-concept for new patterns. Create robust documentation from data analysis and profiling, proposed designs, and data logic. Develop advanced SQL queries to profile and unify data. Develop data processing code in SQL, along with semantic views to prepare data for reporting. Develop Power BI models and reporting packages. Design robust data models and processing layers that support both analytical processing and operational reporting needs. Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments. Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms. Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability. Collaborate with partners to define data requirements, functional specifications, and project goals. Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications. Basic Qualifications: Master's degree with 1 to 3 years of experience in Data Engineering OR Bachelor's degree with 1 to 3 years of experience in Data Engineering. Must-Have Skills: Minimum of 1 year of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
Minimum of 1 year of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management. Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads. Experience using cloud platforms (AWS), data lakes, and data warehouses. Working knowledge of ETL processes, data pipelines, and integration technologies. Good communication and collaboration skills to work with cross-functional teams and senior leadership. Ability to assess business needs and design solutions that align with organizational goals. Exceptional hands-on capabilities with data profiling and data analysis. Good-to-Have Skills: Experience with human data, ideally human healthcare data. Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management. Professional Certifications: ITIL Foundation or other relevant certifications (preferred). SAFe Agile Practitioner (6.0). Microsoft Certified: Data Analyst Associate (Power BI) or related certification. Databricks Certified Professional or similar certification. Soft Skills: Excellent analytical and troubleshooting skills. Deep intellectual curiosity. Highest degree of initiative and self-motivation. Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences. Confident technical leader. Ability to work effectively with global, virtual teams, including the use of tools and artifacts to assure clear and efficient collaboration across time zones. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong problem-solving and analytical skills; ability to learn quickly, and to retain and synthesize complex information from diverse sources.

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 5 Lacs

Chennai, Tamil Nadu, India

On-site

Qualification: Design, develop, and deploy machine learning and deep learning models. Collaborate with data scientists, product managers, and engineers to identify opportunities for AI integration. Build scalable and efficient data pipelines to train and serve AI models. Optimize models for performance, accuracy, and resource efficiency. Monitor and maintain production AI systems, ensuring reliability and performance. Research and prototype new AI techniques and stay current with the latest advancements in AI and ML. Document models, processes, and methodologies clearly and concisely.
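
A deliberately small end-to-end illustration of the train-and-serve loop described above, using scikit-learn and joblib; a production pipeline would swap in real data and a model registry:

```python
# Train a classifier, persist it, and reload it for inference.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

joblib.dump(model, "model.joblib")   # simplified "deploy" step
served = joblib.load(\u0022model.joblib")
print("prediction:", served.predict(X_test[:1]))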

Posted 1 month ago

Apply

7.0 - 9.0 years

7 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a highly skilled Senior Data Engineer with strong experience in cloud-based data engineering and AI solutions. The ideal candidate will bring a combination of software engineering, big data processing, and cloud infrastructure expertise. This role requires hands-on experience in building data pipelines on AWS or Azure, with a strong emphasis on Databricks, Spark, and Python. You will work within Agile teams, contributing to the design, development, and deployment of scalable, high-performance data solutions. A passion for quality, testing, and continuous learning is key to success in this role. Required Qualifications: Bachelor's degree or equivalent in Computer Science, Computer Engineering, or a related field. 7+ years of experience in data or software engineering roles. 3+ years of experience building cloud-based data pipelines and AI/ML data solutions on AWS or Azure. 3+ years of hands-on experience with Python and Apache Spark. Deep experience with Databricks, including: Spark-based processing; building and managing scalable data pipelines; use of Databricks notebooks and platform tools for analytics and machine learning. Key Responsibilities: Design, build, and maintain cloud-native data pipelines that support AI/ML models and large-scale analytics workloads. Leverage Databricks and Spark to process structured and unstructured data efficiently. Collaborate with cross-functional teams, including data scientists, product owners, and DevOps engineers, in an Agile environment. Apply testing methodologies and best practices to ensure high-quality, resilient data solutions. Contribute to code reviews, data modeling, and the development of data quality frameworks. Stay current on evolving tools, technologies, and industry trends to continuously improve processes and architecture. Required Skills: Strong Python programming and data manipulation skills. Expertise in Apache Spark and large-scale distributed data processing. In-depth experience with Databricks as a development and orchestration environment. Solid understanding of cloud architecture and data services on AWS or Azure. Experience working in Agile teams with modern development and deployment practices. Strong written and verbal communication skills with the ability to convey technical topics clearly. Desirable Skills: Databricks certification (Developer or Data Engineer). Experience working within a DevOps delivery model, including CI/CD for data pipelines. Understanding of quality and compliance frameworks, and applying standards in production environments. Prior experience in industries with large, complex data environments, or consulting/vendor roles.

Posted 1 month ago

Apply

0.0 - 5.0 years

0 - 5 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As a Big Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include developing, maintaining, evaluating, and testing big data solutions; data engineering activities like creating pipelines/workflows from source to target; and implementing solutions that tackle the client's needs. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Big Data development with Hadoop, Hive, Spark, PySpark, and strong SQL. Ability to incorporate a variety of statistical and machine learning techniques. Basic understanding of cloud (AWS, Azure, etc.). Ability to use programming languages like Java, Python, Scala, etc. to build pipelines to extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java. Preferred technical and professional experience: Basic understanding of or experience with predictive/prescriptive modeling skills. You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.

Posted 1 month ago

Apply

0.0 - 5.0 years

0 - 5 Lacs

Pune, Maharashtra, India

On-site

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques; designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements; working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours; building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements. Preferred technical and professional experience: Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or shell scripting for custom transformations and automation tasks.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Chennai, Tamil Nadu, India

On-site

Qualification: Talend - Design, develop, and document Talend ETL processes, technical architecture, data pipelines, and performance scaling, using tools to integrate Talend data and ensure data quality in a big data environment. AWS / Snowflake - Design, develop, and maintain data models using SQL and Snowflake / AWS Redshift-specific features. Collaborate with stakeholders to understand the requirements of the data warehouse. Implement data security, privacy, and compliance measures. Perform data analysis, troubleshoot data issues, and provide technical support to end-users. Develop and maintain data warehouse and ETL processes, ensuring data quality and integrity. Stay current with new AWS/Snowflake services and features and recommend improvements to existing architecture. Design and implement scalable, secure, and cost-effective cloud solutions using AWS / Snowflake services. Collaborate with cross-functional teams to understand requirements and provide technical guidance.

Posted 1 month ago

Apply

3.0 - 8.0 years

9 - 16 Lacs

Pune

Work from Office

We are looking for a skilled Azure Data Engineer to design, develop, and optimize data pipelines using one of the following skill combinations: 1) SQL + ETL + Azure + Python + PySpark + Databricks; 2) SQL + ADF + Azure; 3) SQL + Python + PySpark. - Strong proficiency in SQL for data manipulation and querying. Required Candidate Profile - Python and PySpark for data engineering tasks. - Experience with Databricks for big data processing and analytics. - Knowledge of data modeling, warehousing, and governance. - CI/CD pipelines for data deployment. Perks and Benefits

Posted 1 month ago

Apply

0.0 - 5.0 years

0 - 5 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. You'll have the opportunity to work with the latest technologies, ensuring the applications delivered are high performing, highly available, responsive, and maintainable. Your primary responsibilities include: Analytical Problem-Solving and Solution Enhancement: Analyze, validate, and propose improvements to existing failures, with the support of the architect and technical leader. Comprehensive Engagement Across Process Phases: Involvement in every step of the process, from design and development to testing, releasing changes, and troubleshooting where necessary, providing great customer service. Strategic Stakeholder Engagement and Innovative Coding Solutions: Drive key discussions with your stakeholders and analyze the current landscape for opportunities to operate and code creative solutions. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Technical expertise in Java development projects. Understanding of and experience in Java coding using various frameworks and design patterns. Knowledge of data pipelines. Developing data bridge pipelines using the replicator framework. Writing JUnit test cases for the pipelines. Preferred technical and professional experience: Experience in data analytics. Working knowledge of the Plx framework and tools. Knowledge of Workday integrations with external systems and experience working on Google Cloud Platform.

Posted 1 month ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Job Description: A Pricing & Revenue Growth Consultant advises on building a pricing and promotion tool for a CPG client, covering pricing strategies, trade promotions, and revenue growth initiatives. This will involve building analytics and machine learning models for price elasticity, promotion effectiveness, and trade promotion optimization. The consultant will collaborate with CPG business, marketing, data science, and other teams for successful delivery of the project and tool.
Required Skills
Business Domain Skills:
- Trade Promotion Management (TPM), Trade Promotion Optimization (TPO), Trade Spend Efficiency
- Promotion Depth, Frequency, Forecasting, Scenario Planning, and Effectiveness
- Price Pack Architecture, Pricing Elasticity, and Price Sensitivity
- Competitive Price Tracking, Benchmarking, and Effectiveness
- Revenue Growth Management, Category and Channel Growth, Market Entry and New Product Pricing
- Financial Modeling, Dynamic Pricing Implementation
- AI / Machine Learning for Pricing
Typical Work Environment
- Own delivery of RGM solutions, ensuring high-quality deliverables and measurable business outcomes for clients
- Collaborative work with cross-functional teams across sales, marketing, and product development
- Stakeholder management and team handling
- Fast-paced environment with a focus on delivering timely insights to support business decisions
- Excellent problem-solving skills and the ability to address complex technical challenges
- Effective communication skills to collaborate with cross-functional teams and stakeholders
- Ability to work on multiple projects simultaneously, prioritizing tasks based on business impact
Experience and Qualification
- Overall 8 to 10 years of experience in Retail / CPG analytics, with 2 to 4 years of experience in RGM products and services within the Retail / CPG industry
- Expertise in pricing, promotions, price pack architecture, trade spend, etc.
- Experience engaging senior business stakeholders (VP / C-level) in strategic discussions
- Good understanding of the application of AI in building and delivering RGM solutions and the underlying technology
- Understanding of, and experience working in, Agile methodologies
- Degree in Data Science or Computer Science with a data science specialization; Master's in Business Administration in Marketing and Analytics preferred
Key Responsibilities: A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to lead the engagement effort of providing high-quality, value-adding consulting solutions to customers at different stages, from problem definition to diagnosis to solution design, development, and deployment. You will review the proposals prepared by consultants, provide guidance, and analyze the solutions defined for the client's business problems to identify any potential risks and issues. You will identify change management requirements and propose a structured approach to the client for managing the change, using multiple communication mechanisms. You will also coach and create a vision for the team, provide subject matter training for your focus areas, and motivate and inspire team members through effective and timely feedback and recognition for high performance. You would be a key contributor to unit-level and organizational initiatives, with the objective of providing high-quality, value-adding consulting solutions to customers while adhering to the guidelines and processes of the organization. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you.
Technical Requirements: Primary skills: Technology - Oracle Industry Solutions - Retail Merchandise
Technical Skills:
- Proficiency in programming languages like Python and R for data manipulation and analysis
- Expertise in machine learning algorithms and statistical modeling techniques
- Familiarity with data warehousing and data pipelines
- Experience with data visualization tools like Tableau or Power BI
- Experience with cloud platforms (e.g., ADF, Databricks, Azure) and their AI services
Consulting Skills:
- Hypothesis-driven problem solving
- Go-to-market pricing and revenue growth execution
- Advisory presentations and data storytelling
- Project leadership and execution
Additional Responsibilities:
- Good knowledge of software configuration management systems
- Strong business acumen, strategy, and cross-industry thought leadership
- Awareness of the latest technologies and industry trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Domain knowledge of two or three industries
- Understanding of the financial processes for various types of projects and the various pricing models available
- Client-interfacing skills
- Knowledge of SDLC and agile methodologies
- Project and team management
Preferred Skills: Technology->Oracle Industry Solutions->Retail Merchandise->Oracle-Retail Price Management (ORPM)
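
Price elasticity, mentioned repeatedly above, is commonly estimated with a log-log regression, where the coefficient on log price is the elasticity. Below is a minimal sketch on synthetic data; the numbers, column meanings, and the true elasticity of -1.8 are illustrative assumptions, not client figures.

```python
# A minimal sketch of estimating own-price elasticity via log-log OLS;
# the data is synthetic and the true elasticity (-1.8) is an
# illustrative assumption.
import numpy as np

rng = np.random.default_rng(7)
n = 500
price = rng.uniform(2.0, 6.0, n)
# Demand generated with constant elasticity -1.8 plus noise:
# log(qty) = a + b*log(price) + e, with b = -1.8.
log_qty = 6.0 - 1.8 * np.log(price) + rng.normal(0, 0.2, n)

# OLS on [1, log(price)]: the slope estimates the elasticity.
X = np.column_stack([np.ones(n), np.log(price)])
beta, *_ = np.linalg.lstsq(X, log_qty, rcond=None)
print(f"estimated elasticity: {beta[1]:.2f}")  # ~ -1.8

# Interpretation: a 1% price increase moves volume by roughly beta[1]%.
```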

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

IBM Consulting Overview A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation of success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience. In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role and Responsibilities As a Software Developer, you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. You'll have the opportunity to work with the latest technologies, ensuring the applications delivered are high-performing, highly available, responsive, and maintainable. Your primary responsibilities include: Analytical Problem-Solving and Solution Enhancement: Analyze, validate, and propose improvements for existing failures, with the support of the architect and technical leader. Comprehensive Engagement Across Process Phases: Involvement in every step of the process, from design, development, and testing to releasing changes and troubleshooting where necessary, providing great customer service. Strategic Stakeholder Engagement and Innovative Coding Solutions: Drive key discussions with stakeholders and analyze the current landscape for opportunities to operate and code creative solutions. Required Education Bachelor's Degree Preferred Education Master's Degree Required Technical and Professional Expertise Technical expertise in Java development projects. Understanding of and experience in Java coding using various frameworks and design patterns. Knowledge of data pipelines. Developing data bridge pipelines using the replicator framework. Writing JUnit test cases for pipelines. Preferred Technical and Professional Experience Experience in data analytics. Working knowledge of the Plx framework and tools. Knowledge of Workday integrations with external systems. Experience working on Google Cloud Platform.

Posted 1 month ago

Apply

3.0 - 6.0 years

2 - 6 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities Establish and implement best practices for DBT workflows, ensuring efficiency, reliability, and maintainability. Collaborate with data analysts, engineers, and business teams to align data transformations with business needs. Monitor and troubleshoot data pipelines to ensure accuracy and performance. Work with Azure-based cloud technologies to support data storage, transformation, and processing. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Strong MS SQL and Azure Databricks experience. Implement and manage data models in DBT, ensuring data transformations align with business requirements. Ingest raw, unstructured data into a cloud object store. Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting. Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance. Preferred technical and professional experience Establish DBT best practices to improve performance, scalability, and reliability. Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Databricks. Proven interpersonal skills, contributing to team effort by accomplishing related results as required.
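
For context on the DBT workflow work described above, dbt-core (1.5+) exposes a programmatic runner, so transformations can be orchestrated from Python rather than only from the CLI; the model selector and project layout below are hypothetical placeholders.

```python
# A minimal sketch of invoking dbt from Python via dbt-core's programmatic
# API (dbt-core 1.5+); the selector and project layout are hypothetical.
from dbt.cli.main import dbtRunner, dbtRunnerResult

dbt = dbtRunner()

# Run only the staging models, equivalent to running
# `dbt run --select staging` inside the project directory.
res: dbtRunnerResult = dbt.invoke(["run", "--select", "staging"])

if res.success:
    for r in res.result:  # per-model outcomes
        print(f"{r.node.name}: {r.status}")
else:
    raise RuntimeError(f"dbt run failed: {res.exception}")
```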

Posted 1 month ago

Apply