7.0 - 12.0 years
7 - 17 Lacs
Bengaluru
Work from Office
About this role: Wells Fargo is seeking a Senior Lead Business Execution Consultant.

In this role, you will:
- Act as a Business Execution advisor to leadership to drive performance and initiatives, and develop and implement information delivery or presentations to key stakeholders and senior management
- Lead the strategy and resolution of highly complex and unique challenges related to Business Execution that require solid analytical skills, extensive knowledge of Business Execution, and understanding of the business, delivering longer-term and large-scale solutions
- Provide vision, direction, and expertise to senior leadership for implementing innovative and significant business solutions that are large scale and cross-organizational
- Lead team meetings or steering committees to facilitate decision making and support implementation of recommendations and plans
- Strategically engage with all levels of professionals and managers across multiple lines of business and serve as an experienced advisor to leadership
- Provide direction to a cross-functional team using business expertise

Required Qualifications:
- 7+ years of Business Execution, Implementation, or Strategic Planning experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Business Administration, or a related field
- Proven experience in innovation roles within the banking or financial services industry, preferably within a global capability center (GCC) or captive unit
- Strong understanding of banking processes and the financial services industry
- Demonstrated expertise in emerging technologies, including generative AI, agentic AI, data engineering, data mining and visualization, machine learning, NLP, and intelligent automation
- Exceptional research and analytical skills, with the ability to translate complex data into actionable insights
- Excellent project management skills, with a track record of successfully leading innovation projects
- Outstanding communication and presentation skills, with experience in content creation and delivering innovation roadshows
- Ability to work collaboratively in a cross-functional team environment and manage relationships with diverse stakeholders

Job Expectations:
We are seeking a dynamic and forward-thinking Innovation and Incubation Specialist with hands-on experience to join our team. In this role, you will be instrumental in identifying and researching emerging technologies, developing innovative solutions, and leading industry research and market studies to identify platforms that support the digital transformation journey within the bank. Your expertise in generative AI, intelligent automation, and market analysis will be crucial in shaping the future of our banking operations.
- Emerging Technology Research: Stay abreast of the latest technological trends, with a focus on generative AI, agentic AI, fintech, data engineering, data mining and visualization, machine learning, NLP/NLG, and intelligent automation, to identify opportunities for innovation within banking processes
- Solution and Platform Identification: Evaluate and recommend new technologies, platforms, and solutions that align with the bank's strategic objectives and have the potential to enhance operational efficiency and customer experience
- Market and Industry Analysis: Conduct comprehensive market research and industry analysis to understand competitive landscapes, identify market opportunities, and inform strategic decision-making
- Process Understanding and Improvement: Analyze existing banking processes to identify areas for improvement and develop innovative solutions that streamline operations and enhance service delivery
- Innovation Roadshows and Content Creation: Develop and deliver engaging presentations, newsletters, and other content to communicate innovation initiatives and foster a culture of innovation within the organization
- Proof of Concept Development: Lead the development and execution of proof-of-concept projects to validate the feasibility and value of new technologies and solutions
- Project Management: Oversee innovation projects from inception to completion, ensuring timely delivery, effective communication, and alignment with strategic goals
Posted 3 weeks ago
9.0 - 14.0 years
35 - 65 Lacs
Bengaluru
Hybrid
Software Architect - Product MNC - Urgent Hiring!!!

Are you passionate about crafting scalable, cloud-native architectures? Do you thrive at the intersection of AI, cloud, and data platforms? Join us at Integra Connect, where we are transforming healthcare technology through intelligent solutions and innovation.

Location: Bangalore - Hybrid

What You'll Do:
- Architect end-to-end systems using Azure, Snowflake, Python, and .NET
- Lead AI/ML architecture and drive responsible AI practices
- Design scalable data platforms and ETL pipelines
- Mentor cross-functional engineering teams
- Set best practices for performance, security, and cloud optimization

What We're Looking For:
- 8+ years in software development, 3+ years in architecture
- Deep expertise in Azure DevOps, AKS, Snowflake, AI/ML, and .NET/C#
- Experience in cloud-native architectures and healthcare systems is a plus
- Strong leadership, strategic thinking, and problem-solving skills

At Integra Connect, you'll be part of a team enabling specialty healthcare practices to thrive in a value-based care world, leveraging modern tech to make real impact.

Hybrid | Competitive Benefits | Growth-Focused Culture

Ready to architect the future of healthcare? Apply now or DM me for more info!
Posted 3 weeks ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the roadmap, and taking ownership of and responsibility for new projects
- Participating in the on-call rotation in their respective time zones (being available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 8 years of software engineering experience
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired
- 2+ years of experience/fluency in Python
- Proficiency with relational databases and advanced SQL
- Expertise in services like Spark and Hive
- Experience working with container-based solutions is a plus
- Experience with a scheduler such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Proven long-term experience with and enthusiasm for distributed data processing at scale, and eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Experience in the Advertising Attribution domain is a plus
- Experience in agile software development processes
- Excellent interpersonal and communication skills
Posted 3 weeks ago
7.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You'll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world.

A Day in the Life
Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. We're a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation; join our new MiniMed India Hub as Senior Digital Engineer.

Responsibilities may include the following, and other duties may be assigned:
- Expertise in translating conceptual needs and business requirements into finalized architectural design
- Able to manage large projects or processes that span other collaborative teams both within and beyond Digital Technology
- Operate autonomously to define, describe, diagram, and document the role and interaction of the high-level technological and human components that combine to provide cost-effective and innovative solutions to meet evolving business needs
- Promote, guide, and govern good architectural practice through the application of well-defined, proven technology and human interaction patterns and through architecture mentorship
- Design, develop, and maintain scalable data pipelines, preferably using PySpark
- Work with structured and unstructured data from various sources
- Optimize and tune PySpark applications for performance and scalability
- Deep experience supporting the full lifecycle management of the entire IT portfolio, including the selection, appropriate usage, enhancement, and replacement of information technology applications, infrastructure, and services
- Implement data quality checks and ensure data integrity
- Monitor and troubleshoot data pipeline issues and ensure timely resolution
- Document technical specifications and maintain comprehensive documentation for data pipelines

The ideal candidate is exposed to the fast-paced world of Big Data technology and has experience in building ETL/ELT data solutions using new and emerging technologies while maintaining the stability of the platform.

Required Knowledge and Experience:
- Strong programming knowledge in Java, Scala, Python or PySpark, and SQL
- 4-8 years of experience in data engineering, with a focus on PySpark
- Proficiency in Python and Spark, with strong coding and debugging skills
- Experience in designing and building enterprise data solutions on AWS, Azure, or Google Cloud Platform (GCP)
- Experience with big data technologies such as Hadoop, Hive, and Kafka
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server)
- Experience with data warehousing solutions like Redshift, Snowflake, Databricks, or Google BigQuery
- Familiarity with data lake architectures and data storage solutions
- Knowledge of CI/CD pipelines and version control systems (e.g., Git)
- Excellent problem-solving skills and the ability to troubleshoot complex issues
- Strong communication and collaboration skills, with the ability to work effectively in a team environment

Physical Job Requirements
The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.

Regards,
Ashwini Ukekar
Sourcing Specialist
Posted 3 weeks ago
4.0 - 7.0 years
6 - 12 Lacs
Pune
Hybrid
A Day in the Life
We're a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation; join our new MiniMed India Hub as Digital Engineer. We are working to improve how healthcare addresses the needs of more people, in more ways and in more places around the world. As a PySpark Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines using PySpark. You will work closely with data scientists, analysts, and other stakeholders to ensure the efficient processing and analysis of large datasets, while handling complex transformations and aggregations.

Responsibilities may include the following, and other duties may be assigned:
- Design, develop, and maintain scalable and efficient ETL pipelines using PySpark
- Work with structured and unstructured data from various sources
- Optimize and tune PySpark applications for performance and scalability
- Collaborate with data scientists and analysts to understand data requirements, review Business Requirement documents, and deliver high-quality datasets
- Implement data quality checks and ensure data integrity
- Monitor and troubleshoot data pipeline issues and ensure timely resolution
- Document technical specifications and maintain comprehensive documentation for data pipelines
- Stay up to date with the latest trends and technologies in big data and distributed computing

Required Knowledge and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 4-5 years of experience in data engineering, with a focus on PySpark
- Proficiency in Python and Spark, with strong coding and debugging skills
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server)
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP)
- Experience with data warehousing solutions like Redshift, Snowflake, Databricks, or Google BigQuery
- Familiarity with data lake architectures and data storage solutions
- Experience with big data technologies such as Hadoop, Hive, and Kafka
- Excellent problem-solving skills and the ability to troubleshoot complex issues
- Strong communication and collaboration skills, with the ability to work effectively in a team environment

Preferred Skills:
- Experience with Databricks
- Experience with orchestration tools like Apache Airflow or AWS Step Functions
- Knowledge of machine learning workflows and experience working with data scientists
- Understanding of data security and governance best practices
- Familiarity with streaming data platforms and real-time data processing
- Knowledge of CI/CD pipelines and version control systems (e.g., Git)

Physical Job Requirements
The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.

If interested, please share your updated CV at ashwini.ukekar@medtronic.com

Regards,
Ashwini Ukekar
Sourcing Specialist
Medtronic
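The ETL duties this role describes follow a common extract-transform-load shape with a data-quality gate before loading. A minimal sketch of that pattern, using plain Python to stand in for PySpark (in PySpark the same shape would use spark.read, DataFrame filters, and DataFrame.write); the field names here are invented for illustration:

```python
# Illustrative extract-transform-load pipeline with a data-quality check.
# Field names (patient_id, glucose_mgdl) are hypothetical examples.

def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(rows)

def transform(rows):
    """Transform: normalize fields and drop records that fail quality checks."""
    out = []
    for r in rows:
        if r.get("glucose_mgdl") is None:  # quality gate: required field present
            continue
        out.append({
            "patient_id": str(r["patient_id"]).strip(),
            "glucose_mgdl": float(r["glucose_mgdl"]),
        })
    return out

def load(rows, sink):
    """Load: append validated records to the target store."""
    sink.extend(rows)
    return len(rows)

source = [
    {"patient_id": " p1 ", "glucose_mgdl": "104"},
    {"patient_id": "p2", "glucose_mgdl": None},  # fails the quality gate
]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
print(loaded)                      # 1
print(warehouse[0]["patient_id"])  # p1
```

The same three stages scale from this toy list to distributed DataFrames; the quality gate is what keeps malformed source records out of downstream analytics.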
Posted 3 weeks ago
2.0 - 4.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Fusion Plus Solutions Inc is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
- Reformulating existing frameworks to optimize their functioning
- Testing such structures to ensure that they are fit for use
- Preparing raw data for manipulation by data scientists
- Detecting and correcting errors in your work
- Ensuring that your work remains backed up and readily accessible to relevant coworkers
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs
Posted 3 weeks ago
3.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Tech Stalwart Solution Private Limited is looking for a Sr. Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
- Reformulating existing frameworks to optimize their functioning
- Testing such structures to ensure that they are fit for use
- Preparing raw data for manipulation by data scientists
- Detecting and correcting errors in your work
- Ensuring that your work remains backed up and readily accessible to relevant coworkers
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs
Posted 3 weeks ago
8.0 - 12.0 years
10 - 20 Lacs
Chennai
Hybrid
Hi [Candidate Name],

We are hiring for a Data Engineering role with a leading organization working on cutting-edge cloud and data solutions. If you're an experienced professional looking for your next challenge, this could be a great fit!

Key Skills Required:
- Strong experience in Data Engineering and Cloud Data Pipelines
- Proficiency in at least 3 languages: Java, Python, Spark, Scala, SQL
- Hands-on with tools like Google BigQuery, Apache Kafka, Airflow, GCP Pub/Sub
- Knowledge of Microservices architecture, REST APIs, and DevOps tools (Docker, GitHub Actions, Terraform)
- Exposure to relational databases: MySQL, PostgreSQL, SQL Server
- Prior experience in an onshore/offshore model is a plus

If this sounds like a match for your profile, reply with your updated resume or apply directly. Looking forward to connecting!

Best regards,
Mahesh Babu M
Senior Executive - Recruitment
maheshbabu.muthukannan@sacha.solutions
Posted 3 weeks ago
0.0 - 3.0 years
2 - 3 Lacs
Noida
Work from Office
Responsibilities: * Design, develop & maintain data pipelines using ETL tools * Optimize database performance through SQL tuning & modeling * Collaborate with cross-functional teams on data initiatives
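The SQL tuning responsibility above usually comes down to giving the query planner an index to seek on instead of forcing a full-table scan. A small, self-contained illustration using SQLite from Python's standard library (the table and index names are invented for the example; the same idea applies to any relational database's EXPLAIN output):

```python
# Demonstrate SQL tuning: compare the query plan before and after adding an index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)],
)

# Without an index, the lookup scans the whole table.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7"
).fetchone()[-1]

# With an index on the filter column, the planner can seek instead of scan.
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7"
).fetchone()[-1]

print(plan_before)  # e.g. "SCAN events"
print(plan_after)   # e.g. "SEARCH events USING INDEX idx_events_user (user_id=?)"
```

The exact plan strings vary by SQLite version, but the scan-to-seek shift is the tuning win this kind of role looks for day to day.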
Posted 3 weeks ago
7.0 - 12.0 years
18 - 25 Lacs
Bengaluru
Work from Office
JOB DESCRIPTION

Role Expectations:
- Design, develop, and maintain robust, scalable, and efficient data pipelines
- Monitor data workflows and systems to ensure reliability and performance
- Identify and troubleshoot issues related to data flow and database performance
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
- Continuously optimize existing data processes and architectures

Qualifications:
- Programming Languages: Proficient in Python and SQL
- Databases: Strong experience with Amazon Redshift, Aurora, and MySQL
- Data Engineering: Solid understanding of data warehousing concepts, ETL/ELT processes, and building scalable data pipelines
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
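The monitoring and reliability expectations above often translate, in practice, into hardening pipeline steps against transient failures. A hedged sketch of a retry-with-backoff wrapper of the kind commonly used around flaky extraction calls (the function names here are illustrative, not from any specific codebase):

```python
# Retry a flaky pipeline step with exponential backoff.
import time

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Run fn, retrying on failure with exponentially growing delays."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the scheduler
            time.sleep(base_delay * 2 ** (attempt - 1))

calls = {"n": 0}

def fetch_batch():
    """Hypothetical extraction step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:  # simulate two transient failures
        raise ConnectionError("transient")
    return [1, 2, 3]

rows = with_retries(fetch_batch)
print(rows)        # [1, 2, 3]
print(calls["n"])  # 3
```

Orchestrators such as Airflow provide retries as configuration, but the same pattern is worth understanding when troubleshooting why a data flow recovered (or didn't).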
Posted 3 weeks ago
7.0 - 12.0 years
18 - 25 Lacs
Noida, Gurugram, Bengaluru
Work from Office
JOB DESCRIPTION

Role Expectations:
- Design, develop, and maintain robust, scalable, and efficient data pipelines
- Monitor data workflows and systems to ensure reliability and performance
- Identify and troubleshoot issues related to data flow and database performance
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
- Continuously optimize existing data processes and architectures

Qualifications:
- Programming Languages: Proficient in Python and SQL
- Databases: Strong experience with Amazon Redshift, Aurora, and MySQL
- Data Engineering: Solid understanding of data warehousing concepts, ETL/ELT processes, and building scalable data pipelines
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
Posted 3 weeks ago
7.0 - 12.0 years
3 - 7 Lacs
Gurugram
Work from Office
AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.

At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

AHEAD is looking for a Sr. Data Engineer (L3 support) to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will have responsibility for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. The appropriate candidate must be a subject matter expert in managing data platforms.

Responsibilities:
- A Sr. Data Engineer should be able to build, operationalize, and monitor data processing systems
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging the cloud-native toolset
- Implement custom applications using tools such as Event Hubs, ADF, and other cloud-native tools as required to address streaming use cases
- Engineer and maintain ELT processes for loading the data lake (Cloud Storage, Data Lake Gen2)
- Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions
- Respond to customer/team inquiries and escalations and assist in troubleshooting and resolving challenges
- Work with other scrum team members to estimate and deliver work inside of a sprint
- Research data questions, identify root causes, and interact closely with business users and technical resources
- Should possess ownership and leadership skills to collaborate effectively with Level 1 and Level 2 teams
- Must have experience in raising tickets with Microsoft and engaging with them to address any service or tool outages in production

Qualifications:
- 7+ years of professional technical experience
- 5+ years of hands-on Data Architecture and Data Modelling at SME level
- 5+ years of experience building highly scalable data solutions using Azure Data Factory, Spark, Databricks, and Python
- 5+ years of experience working in cloud environments (AWS and/or Azure)
- 3+ years with programming languages such as Python, Spark, and Spark SQL
- Strong knowledge of the architecture of ADF and Databricks
- Able to work with Level 1 and Level 2 teams to resolve platform outages in production environments
- Strong client-facing communication and facilitation skills
- Strong sense of urgency, with the ability to set priorities and perform the job with little guidance
- Excellent written and verbal interpersonal skills and the ability to build and maintain collaborative and positive working relationships at all levels
- Strong interpersonal and communication skills (written and oral) required
- Should be able to work in shifts
- Should have knowledge of the Azure DevOps process

Key Skills: Azure Data Factory, Azure Databricks, Python, ETL/ELT, Spark, Data Lake, Data Engineering, Event Hubs, Azure Delta, Spark Streaming

Why AHEAD:
Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.

USA Employment Benefits include:
- Medical, Dental, and Vision Insurance
- 401(k)
- Paid company holidays
- Paid time off
- Paid parental and caregiver leave
- Plus more! See https://www.aheadbenefits.com/ for additional details.

The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
Posted 3 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Pune
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequently occurring trends to prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve issues, escalate them to TA & SES in a timely manner
- Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver (No. / Performance Parameter / Measure):
1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team Management: Productivity, efficiency, absenteeism
3. Capability Development: Triages completed, Technical Test performance

Mandatory Skills: DataBricks - Data Engineering
Experience: 5-8 Years
Posted 3 weeks ago
4.0 - 9.0 years
13 - 18 Lacs
Bengaluru
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it , our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning; bold ideas; courage an d passion to drive life-changing impact to ZS. Our most valuable asset is our people . At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. ZS’s India Capability & Expertise Center (CEC) houses more than 60% of ZS people across three offices in New Delhi, Pune and Bengaluru. Our teams work with colleagues around the world to deliver real-world solutions to the clients who drive our business. The CEC maintains standards of analytical, operational and technologic excellence to deliver superior results to our clients. ZS’s Beyond Healthcare Analytics (BHCA) Team is shaping one of the key growth vectors for ZS. 
Beyond Healthcare engagements are comprised of clients from industries like Quick service restaurants, Technology, Food & Beverage, Hospitality, Travel, Insurance, Consumer Products Goods & other such industries across North America, Europe & South East Asia region. BHCA India team currently has presence across New Delhi, Pune and Bengaluru offices and is continuously expanding further at a great pace. BHCA India team works with colleagues across clients and geographies to create and deliver real world pragmatic solutions leveraging AI SaaS products & platforms, Generative AI applications, and other Advanced analytics solutions at scale. WhatYou’llDo Design and implement highly available data pipelines using spark and other big data technologies Work with data science team to develop new features to increase model accuracy and performance Create standardized data models to increase standardization across client deployments Troubleshooting and resolve issues in existing ETL pipelines. Complete proofs of concept to demonstrate capabilities and connect to new data sources Instill best practices for software development, ensure designs meet requirements, and deliver high-quality work on schedule. Document application changes and development updates. WhatYou’llBring A master’s or bachelor’s degree in computer science or related field from a top university. 4+ years' overall experience; 2+ years’ experience in data engineering using Apache Spark and SQL. 2+ years of experience in building and leading a strong data engineering team. Experience with full software lifecycle methodology, including coding standards, code reviews, source control management, build processes, testing, and operations. In-depth knowledge of python, sql, pyspark, distributed computing, analytical databases and other big data technologies. Strong knowledge of one or more cloud environments such as aws, gcp, and azure. 
Familiarity with the data science and machine learning development process Familiarity with orchestration tools such as Apache Airflow Strong analytical skills and the ability to develop processes and methodologies. Experience working with cross-functional teams, including UX, business (e.g. Marketing, Sales), product management and/or technology/IT/engineering) is a plus. Characteristics of a forward thinker and self-starter that thrives on new challenges and adapts quickly to learning new knowledge. Perks & Benefits ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empowers you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections. Travel Travel is a requirement at ZS for client facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures. Considering applying At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. 
If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.

Find Out More At www.zs.com
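The pipeline duties in the posting above (ingest, transform, quality-check, load) follow a standard ETL shape. The sketch below illustrates that shape in plain Python; it is a hedged stand-in, not ZS's actual stack, and all names (extract_orders, load_to_warehouse) are invented. In production the same flow would typically be expressed as Spark/PySpark transformations.

```python
# Minimal ETL sketch: extract -> transform -> validate -> load.
# Pure-Python stand-in for the Spark-based pipelines the role describes.

def extract_orders(raw_rows):
    """Extract: parse raw CSV-like rows into dicts."""
    header, *rows = raw_rows
    cols = header.split(",")
    return [dict(zip(cols, r.split(","))) for r in rows]

def transform(records):
    """Transform: cast types and derive a revenue column."""
    out = []
    for r in records:
        qty, price = int(r["qty"]), float(r["price"])
        out.append({"sku": r["sku"], "qty": qty, "revenue": round(qty * price, 2)})
    return out

def validate(records):
    """Data-quality gate: reject negative quantities instead of loading them."""
    return [r for r in records if r["qty"] >= 0]

def load_to_warehouse(records, table):
    """Load: here just an in-memory dict keyed by SKU."""
    for r in records:
        table[r["sku"]] = r
    return table

raw = ["sku,qty,price", "A1,2,9.50", "B2,-1,4.00", "C3,3,2.00"]
warehouse = {}
load_to_warehouse(validate(transform(extract_orders(raw))), warehouse)
print(warehouse["A1"]["revenue"])  # 19.0
```

The quality gate sits between transform and load so that bad records (here, the negative quantity on B2) never reach downstream consumers, mirroring the "troubleshoot and resolve issues in existing ETL pipelines" duty.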
Posted 3 weeks ago
1.0 - 6.0 years
8 - 12 Lacs
Pune
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems, the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

Data Engineer - Data Engineering & Analytics

What you'll do:
- Create and maintain optimal data pipeline architecture.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for scalability.
- Design, develop and deploy high-volume ETL pipelines to manage complex and near-real-time data collection.
- Develop and optimize SQL queries and stored procedures to meet business requirements.
- Design, implement, and maintain REST APIs for data interaction between systems.
- Ensure performance, security, and availability of databases.
- Handle common database procedures such as upgrade, backup, recovery, migration, etc.
- Collaborate with other team members and stakeholders.
- Prepare documentation and specifications.

What you'll bring:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 1+ years of experience with SQL, T-SQL, and Azure Data Factory, Synapse, or a relevant ETL technology.
- Strong analytical skills (impact/risk analysis, root cause analysis, etc.).
- Proven ability to work in a team environment, creating partnerships across multiple levels.
- Demonstrated drive for results, with appropriate attention to detail and commitment.
- Hands-on experience with Azure SQL Database.

Perks & Benefits
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all.
We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.

Find Out More At www.zs.com
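The "develop and optimize SQL queries" duty in the posting above usually comes down to making the query planner use an index instead of scanning a table. As a hedged illustration, the snippet below uses SQLite (stdlib) as a stand-in for Azure SQL Database; the table and column names are invented, and the exact plan strings are engine-specific.

```python
# Sketch: an index changes the query plan from a full table scan to an
# index search. SQLite stands in for Azure SQL Database here.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, customer_id INTEGER, amount REAL)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(i, i % 100, float(i)) for i in range(1000)])

query = "SELECT SUM(amount) FROM events WHERE customer_id = ?"

# Before indexing: the planner scans the whole table.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]

con.execute("CREATE INDEX idx_events_customer ON events(customer_id)")

# After indexing: the planner searches via the index instead.
plan_after = con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchone()[-1]

print(plan_before)  # a SCAN of the events table
print(plan_after)   # a SEARCH using idx_events_customer
```

Azure SQL exposes the same idea through its execution plans; the principle of inspecting the plan before and after adding an index carries over directly.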
Posted 3 weeks ago
5.0 - 10.0 years
10 - 13 Lacs
Bengaluru
Work from Office
Greetings!

Requirement: Data Engineer
Location: Bangalore
Mandatory skills: data engineering, Python, SQL
Notice period: Immediate to 1 week
Posted 3 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Mysuru
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
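The PySpark requirement above centers on Spark's core distributed model: map, shuffle (group by key), reduce. Since a cluster isn't available on this page, the sketch below mimics that flow over plain-list "partitions" in pure Python; the data is invented. With PySpark the equivalent would be roughly `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)`.

```python
# Pure-Python word count mirroring Spark's map -> shuffle -> reduce flow.
from collections import defaultdict

partitions = [["spark makes big data simple"], ["big data needs spark"]]

# Map phase: each partition independently emits (word, 1) pairs.
mapped = [(w, 1) for part in partitions for line in part for w in line.split()]

# Shuffle phase: group pairs by key, as Spark does across the cluster.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: combine each key's values into a final count.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts["spark"], counts["big"])  # 2 2
```

The point of the three-phase shape is that the map and reduce steps are embarrassingly parallel; only the shuffle requires moving data between partitions, which is why Spark tuning so often focuses on minimizing shuffles.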
Posted 3 weeks ago
2.0 - 5.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 3 weeks ago
5.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies.
- Develop streaming pipelines.
- Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience with cloud data platforms on Azure.
- Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server.
- Exposure to streaming solutions and message brokers such as Kafka.
- Experience with Unix/Linux commands and basic experience in shell scripting.

Preferred technical and professional experience
- Certification in Azure and Databricks, or Cloudera Spark certified developer.
Posted 3 weeks ago
5.0 - 8.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
- Oversee and support the process by reviewing daily transactions on performance parameters.
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product for team members to facilitate better client interaction and troubleshooting.
- Document and analyze call logs to spot the most frequent trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.
- Handle technical escalations through effective diagnosis and troubleshooting of client queries.
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs.
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client.
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes and updates.
- Enroll in product-specific and any other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, technical test performance

Mandatory Skills: Databricks - Data Engineering. Experience: 5-8 Years.
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Job Summary:
We are looking for a skilled Data Engineer to join our team. The ideal candidate will be responsible for building and maintaining scalable data pipelines and ensuring efficient data flow across systems. You'll work closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making.

Key Responsibilities:
- Design, build, and maintain scalable and reliable data pipelines.
- Develop and optimize ETL (Extract, Transform, Load) workflows.
- Integrate data from various sources including APIs, databases, and third-party services.
- Ensure data quality, integrity, and consistency across platforms.
- Implement data warehousing solutions using tools like Snowflake, Redshift, or BigQuery.
- Collaborate with data analysts and scientists to support analytics and reporting needs.
- Monitor and troubleshoot data pipelines to ensure performance and reliability.
- Document data architecture, processes, and standards.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or related roles.
- Strong proficiency in SQL and experience with relational and non-relational databases (e.g., PostgreSQL, MongoDB).
- Experience with big data tools (Hadoop, Spark) and cloud platforms (AWS, GCP, Azure).
- Proficiency in programming languages such as Python, Scala, or Java.
- Experience with workflow orchestration tools like Apache Airflow, Luigi, or Prefect.
- Familiarity with data modeling, data warehousing, and architecture best practices.

Preferred Qualifications:
- Experience with containerization tools (Docker, Kubernetes).
- Knowledge of CI/CD pipelines and version control systems (e.g., Git).
- Exposure to real-time data processing (Kafka, Flink, etc.).
- Knowledge of data privacy and governance practices.
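The orchestration tools named above (Airflow, Luigi, Prefect) all reduce to the same idea: declare tasks and their dependencies as a DAG, then run the tasks in a valid order. As a hedged sketch of that core mechanism, the toy scheduler below topologically sorts a hypothetical pipeline using the standard library; the task names are invented, and a real Airflow DAG would also handle retries, scheduling, and state.

```python
# Toy DAG scheduler: compute a valid execution order for pipeline tasks.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on (must run first)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
    "report": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'quality_check', 'load', 'report']
```

Independent branches of a DAG (tasks whose dependency sets don't chain through each other) can be dispatched in parallel, which is exactly what these orchestrators' executors do.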
Posted 3 weeks ago
3.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role Purpose
The purpose of the role is to define, architect and lead delivery of machine learning and AI solutions.

Do
1. Demand generation through support in solution development
   a. Support the go-to-market strategy: contribute to the development of solutions and proofs of concept aligned to key offerings to enable solution-led sales.
   b. Collaborate with colleges and institutes on research initiatives and provide data science courses.
2. Revenue generation through building and operationalizing machine learning and deep learning solutions
   a. Develop machine learning / deep learning models for decision augmentation or automation solutions.
   b. Collaborate with ML engineers, data engineers, and IT to evaluate ML deployment options.
3. Team management
   a. Talent management: support onboarding and training to enhance capability and effectiveness.

Deliver
No. | Performance Parameter | Measure
1 | Demand generation | # PoCs supported
2 | Revenue generation through delivery | Timeliness, customer success stories, customer use cases
3 | Capability building & team management | # Skills acquired

Mandatory Skills: Data Analysis. Experience: 3-5 Years.
Posted 3 weeks ago
6.0 - 9.0 years
15 - 20 Lacs
Chennai
Work from Office
Skills Required:
- Minimum 6+ years on data engineering / data analytics platforms.
- Strong hands-on design and engineering background in AWS across a wide range of AWS services, with the ability to demonstrate work on large engagements.
- Involvement in requirements gathering and translating requirements into functional and technical designs.
- Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
- Design, build and maintain batch or real-time data pipelines in production.
- Develop ETL/ELT (extract, transform, load) data pipeline processes to extract and manipulate data from multiple sources.
- Automate data workflows such as data ingestion, aggregation, and ETL processing; good experience with different data ingestion techniques (file-based, API-based, streaming sources such as OLTP, OLAP, ODS) and heterogeneous databases.
- Prepare raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
- Strong experience implementing data lake, data warehouse, and data lakehouse architectures.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Experience with AWS tools (S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
- Strong experience with Python, SQL, PySpark, Scala, shell scripting, etc.
- Strong experience with workflow management and orchestration tools (e.g., Airflow).
- Decent experience with and understanding of data manipulation/wrangling techniques.
- Demonstrable knowledge of data engineering best practices (coding practices, unit testing, version control, code review).
- Big data ecosystems: Cloudera/Hortonworks, AWS EMR, etc.
- Snowflake data warehouse/platform.
- Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming.
- Experience working with CI/CD technologies: Git, Jenkins, Spinnaker, Ansible, etc.
- Experience building and deploying solutions to AWS Cloud.
- Good experience with NoSQL databases such as DynamoDB, Redis, Cassandra, MongoDB, or Neo4j.
- Experience working with large data sets and distributed computing (e.g., Hive/Hadoop/Spark/Presto/MapReduce).
- Good to have: working knowledge of data visualization tools such as Tableau, Amazon QuickSight, Power BI, or QlikView.
- Experience in the insurance domain preferred.
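The streaming skills listed above (Kinesis, Kafka, Spark Streaming) ultimately mean aggregating an unbounded event stream in time windows. The sketch below shows the tumbling-window pattern at its simplest in pure Python; the event data and timestamps are invented, and real engines add watermarking, late-data handling, and fault tolerance on top of this idea.

```python
# Tumbling-window count: bucket each event into a fixed-size time window.
from collections import Counter

events = [  # (epoch_seconds, event_type)
    (100, "click"), (104, "click"), (109, "view"),
    (112, "click"), (118, "view"), (121, "click"),
]

WINDOW = 10  # seconds per tumbling window

windows = Counter()
for ts, kind in events:
    window_start = (ts // WINDOW) * WINDOW  # align to the window boundary
    windows[(window_start, kind)] += 1

print(windows[(100, "click")])  # 2 clicks in the [100, 110) window
print(windows[(110, "view")])   # 1 view in the [110, 120) window
```

Because each event maps to exactly one window, tumbling windows partition the stream cleanly; sliding windows relax that by letting windows overlap, at the cost of each event contributing to several buckets.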
Posted 3 weeks ago
5.0 - 10.0 years
6 - 18 Lacs
Bengaluru
Work from Office
We are looking for a skilled and proactive Data Engineer with hands-on experience in Azure Data Services and Microsoft Fabric. In this role, you'll be responsible for building robust, scalable data pipelines and enabling enterprise-grade analytics solutions.
Posted 3 weeks ago
3.0 - 5.0 years
4 - 9 Lacs
Chennai
Work from Office
Skills Required:
- Minimum 3+ years on data engineering / data analytics platforms.
- Strong hands-on design and engineering background in AWS across a wide range of AWS services, with the ability to demonstrate work on large engagements.
- Involvement in requirements gathering and translating requirements into functional and technical designs.
- Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources.
- Design, build and maintain batch or real-time data pipelines in production.
- Develop ETL/ELT (extract, transform, load) data pipeline processes to extract and manipulate data from multiple sources.
- Automate data workflows such as data ingestion, aggregation, and ETL processing; good experience with different data ingestion techniques (file-based, API-based, streaming sources such as OLTP, OLAP, ODS) and heterogeneous databases.
- Prepare raw data in data warehouses into consumable datasets for both technical and non-technical stakeholders.
- Strong experience implementing data lake, data warehouse, and data lakehouse architectures.
- Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
- Monitor data systems performance and implement optimization strategies.
- Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
- Experience with AWS tools (S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
- Strong experience with Python, SQL, PySpark, Scala, shell scripting, etc.
- Strong experience with workflow management and orchestration tools (e.g., Airflow).
- Decent experience with and understanding of data manipulation/wrangling techniques.
- Demonstrable knowledge of data engineering best practices (coding practices, unit testing, version control, code review).
- Big data ecosystems: Cloudera/Hortonworks, AWS EMR, etc.
- Snowflake data warehouse/platform.
- Streaming technologies and processing engines: Kinesis, Kafka, Pub/Sub, and Spark Streaming.
- Experience working with CI/CD technologies: Git, Jenkins, Spinnaker, Ansible, etc.
- Experience building and deploying solutions to AWS Cloud.
- Good experience with NoSQL databases such as DynamoDB, Redis, Cassandra, MongoDB, or Neo4j.
- Experience working with large data sets and distributed computing (e.g., Hive/Hadoop/Spark/Presto/MapReduce).
- Good to have: working knowledge of data visualization tools such as Tableau, Amazon QuickSight, Power BI, or QlikView.
- Experience in the insurance domain preferred.
Posted 3 weeks ago