
8521 PySpark Jobs - Page 29

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever.

Responsibilities include, but are not limited to:
- Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing, doing analysis and machine learning on terabytes and petabytes of diverse datasets
- Experience in statistical modeling, feature extraction and analysis, and supervised/unsupervised/semi-supervised learning; exposure to the semiconductor industry is a plus but not a requirement
- Ability to extract data from different databases via SQL and other query languages, and to apply data cleansing, outlier identification, and missing-data techniques (a small illustration follows this listing)
- Strong software development skills
- Strong verbal and written communication skills

Experience with, or desire to learn:
- Machine learning and other advanced analytical methods
- Fluency in Python and/or R
- PySpark and/or SparkR and/or SparklyR
- Hadoop (Hive, Spark, HBase)
- Teradata and/or other SQL databases
- TensorFlow and/or other statistical software, including scripting capability for automating analyses
- SSIS, ETL
- JavaScript, AngularJS 2.0, Tableau
- Experience working with time-series data, images, semi-supervised learning, and data with frequently changing distributions is a plus
- Experience working with Manufacturing Execution Systems (MES) is a plus
- Papers at CVPR, NIPS, ICML, KDD, and other key conferences are a plus, but this is not a research position

About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities, from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com.

Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.

AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
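To make the data-cleansing skills above concrete, here is a minimal PySpark sketch of that kind of work: pulling rows via SQL, imputing missing values, and flagging outliers with a simple z-score rule. The connection details, table, and column names are hypothetical placeholders, not Micron's actual schema.

```python
# A minimal sketch, assuming a JDBC-reachable SQL database and a numeric
# "reading" column. All names here are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sensor-cleansing").getOrCreate()

# Extract: any JDBC-accessible SQL database works here.
df = spark.read.format("jdbc").options(
    url="jdbc:postgresql://host:5432/fab",   # hypothetical connection
    dbtable="sensor_readings",
    user="reader", password="...",
).load()

# Impute missing readings with the column mean (mean ignores nulls).
mean_val = df.select(F.mean("reading")).first()[0]
df = df.fillna({"reading": mean_val})

# Flag outliers more than 3 standard deviations from the mean.
stats = df.select(F.mean("reading").alias("mu"),
                  F.stddev("reading").alias("sigma")).first()
df = df.withColumn(
    "is_outlier",
    F.abs(F.col("reading") - F.lit(stats.mu)) > 3 * F.lit(stats.sigma),
)
df.filter(~F.col("is_outlier")).write.mode("overwrite").parquet("/clean/sensor_readings")
```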

Posted 1 week ago

Apply

3.0 years

4 Lacs

Delhi

On-site

Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/MSc (IT or CS)/MS
Salary: Up to ₹80k per month (final offer depends on the interview and experience)
Notice Period: Immediate joiners to 20 days
Candidates from Delhi/NCR will be preferred.

Job Summary:
We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities:
- Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies
- Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation
- Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte
- Develop and manage workflow orchestration using Apache Airflow (see the sketch after this listing)
- Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage
- Optimize MapReduce and Spark jobs for performance, scalability, and efficiency
- Ensure data quality, governance, and consistency across the pipeline
- Collaborate with data engineering teams to build scalable and high-performance data solutions
- Monitor, debug, and enhance big data workflows to improve reliability and efficiency

Required Skills & Experience:
- 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark)
- Strong expertise in ETL processes, data transformation, and data warehousing
- Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte
- Proficiency in SQL and handling structured and unstructured data
- Experience with NoSQL databases like MongoDB
- Strong programming skills in Python or Scala for scripting and automation
- Experience in optimizing Spark and MapReduce jobs for high-performance computing
- Good understanding of data lake architectures and big data best practices

Preferred Qualifications:
- Experience in real-time data streaming and processing
- Familiarity with Docker/Kubernetes for deployment and orchestration
- Strong analytical and problem-solving skills with the ability to debug and optimize data workflows

If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!

Job Types: Full-time, Contractual/Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
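As a concrete illustration of the Airflow orchestration responsibility above, here is a minimal Airflow 2.x-style sketch: a daily DAG that submits a PySpark ETL script via spark-submit. The DAG id, connection id, and script path are hypothetical placeholders, and the SparkSubmitOperator assumes the apache-airflow-providers-apache-spark package is installed.

```python
# A minimal sketch, assuming Airflow 2.4+ (the "schedule" argument) and the
# Spark provider package. Names and paths are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_sales_etl",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = SparkSubmitOperator(
        task_id="transform_sales",
        application="/opt/jobs/transform_sales.py",  # hypothetical PySpark script
        conn_id="spark_default",
        conf={"spark.executor.memory": "4g"},
    )
```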

Posted 1 week ago

Apply

5.0 - 9.0 years

3 - 9 Lacs

No locations specified

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Sr Associate IS Architect

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Stand up and enhance BI reporting capabilities through Cognos, Power BI, or similar tools
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree / Bachelor's degree with 5-9 years of experience in Computer Science, IT, or a related field

Functional Skills:

Must-Have Skills:
- Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience using Databricks to build ETL pipelines and handle big data processing (see the sketch after this listing)
- Experience with data warehousing platforms such as Amazon Redshift or Snowflake
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL)
- Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets
- Experience in BI reporting tools such as Cognos, Power BI, and/or Tableau
- Experience with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps

Good-to-Have Skills:
- Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena)
- Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations
- Understanding of machine learning pipelines and frameworks for ML/AI models

Professional Certifications:
- AWS Certified Data Engineer (preferred)
- Databricks Certified (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
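For readers unfamiliar with the Databricks ETL work this listing asks for, here is a minimal PySpark sketch of one such step: reading raw CSV, applying transformations and a basic data-quality filter, and writing a curated Delta table. The paths, columns, and table names are hypothetical, and the Delta output assumes a Databricks-style environment with the Delta Lake libraries available.

```python
# A minimal sketch, assuming a Databricks-like cluster with Delta support.
# All names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read.option("header", True)
       .csv("/mnt/raw/orders/"))                 # hypothetical landing zone

curated = (raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])                # basic data-quality step
    .filter(F.col("amount") > 0))

(curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_curated"))    # hypothetical table
```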

Posted 1 week ago

Apply

8.0 - 13.0 years

3 - 6 Lacs

No locations specified

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
Let's do this. Let's change the world. In this vital role you will work as a member of a Data Platform Engineering team that uses Cloud and Big Data technologies to craft, develop, implement, and maintain solutions that support various functions like Manufacturing, Commercial, Research, and Development.

Roles & Responsibilities:
- Collaborate with the Lead Architect, Business SMEs, and Data Scientists to design data solutions
- Serve as Lead Engineer for the technical implementation of projects, including planning, architecture, design, development, testing, and deployment, following agile methodologies
- Design and develop API services for managing Databricks resources, services, and features, and support data governance applications that manage the security of data assets according to standards (a sketch follows this listing)
- Design and develop enterprise-level reusable components, frameworks, and services that enable data engineers
- Proactively work on challenging data integration problems by implementing efficient ETL patterns and frameworks for structured and unstructured data
- Automate and optimize data pipelines and frameworks for an easier, more efficient development process
- Overall management of the Enterprise Data Fabric/Lake on the AWS environment, ensuring that service delivery is efficient and that business SLAs around uptime, performance, and capacity are met
- Help define guidelines, standards, strategies, security policies, and change management policies to support the Enterprise Data Fabric/Lake
- Advise and support project teams (project managers, architects, business analysts, and developers) on cloud platforms (AWS, Databricks preferred), tools, technology, and methodology related to the design and build of scalable, efficient, and maintainable Data Lake and other Big Data solutions
- Experience developing in an Agile development environment and its ceremonies
- Familiarity with code versioning using GitLab and with code deployment tools
- Mentor junior engineers and team members

What we expect of you

Basic Qualifications:
Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years in Computer Science or Engineering

Must-Have Skills:
- Proven hands-on experience with cloud platforms: AWS (preferred), Azure, or GCP
- Strong development experience with Databricks, Apache Spark, PySpark, and Apache Airflow
- Proficiency in Python-based microservices development and deployment
- Experience with CI/CD pipelines, containerization (Docker, Kubernetes/EKS), and infrastructure-as-code tools
- Demonstrated ability to build enterprise-grade, performance-optimized data pipelines in Databricks using Python and PySpark, following best practices and standards
- Solid understanding of SQL and relational/dimensional data modeling techniques
- Strong analytical and problem-solving skills to address complex data engineering challenges
- Familiarity with standard software engineering methodologies, including version control, automated testing, and continuous integration
- Hands-on experience with key AWS services: EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, and Glue
- Exposure to Agile tools such as Jira or Jira Align

Good-to-Have Skills:
- Experience building APIs and services for provisioning and managing AWS Databricks environments
- Knowledge of the Databricks SDK and REST APIs for managing workspaces, clusters, jobs, users, and permissions
- Familiarity with building AI/ML solutions using Databricks-native features
- Experience working with SQL/NoSQL databases and vector databases for large language model (LLM) applications
- Exposure to model fine-tuning and prompt engineering practices
- Experience developing self-service portals using front-end frameworks like React.js
- Ability to thrive in startup-like environments with minimal direction
- Good communication skills to effectively present technical information to leadership and respond to collaborator inquiries

Certifications (preferred but not required):
- AWS Certified Data Engineer
- Databricks Certification
- SAFe Agile Certification

Soft Skills:
- Strong analytical and problem-solving attitude, with the ability to troubleshoot sophisticated data and platform issues
- Exceptional communication skills: able to translate technical concepts into clear, business-relevant language for diverse audiences
- Collaborative and globally minded, with experience working effectively in distributed, multi-functional teams
- Self-motivated and proactive, demonstrating a high degree of ownership and initiative in driving tasks to completion
- Skilled at managing multiple priorities in fast-paced environments while maintaining attention to detail and quality
- Team-oriented with a growth mindset, contributing to shared goals and fostering a culture of continuous improvement
- Effective time and task management, with the ability to estimate, plan, and deliver work across multiple projects while ensuring consistency and quality

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
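To ground the Databricks management-API responsibility referenced in this listing, here is a minimal sketch that lists workspace clusters over the public Databricks REST API. The workspace URL and token are hypothetical placeholders; the /api/2.0/clusters/list endpoint is part of the documented Databricks REST API.

```python
# A minimal sketch, assuming a personal access token in the environment.
# The workspace host below is a hypothetical placeholder.
import os
import requests

HOST = "https://example-workspace.cloud.databricks.com"  # hypothetical
TOKEN = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{HOST}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Print id, state, and name for each cluster in the workspace.
for cluster in resp.json().get("clusters", []):
    print(cluster["cluster_id"], cluster["state"], cluster["cluster_name"])
```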

Posted 1 week ago

Apply

5.0 - 9.0 years

4 - 8 Lacs

No locations specified

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
As a Sr. Associate IS Security Engineer at Amgen, you will play a critical role in ensuring the security and protection of the company's information systems and data. You will implement security measures, conduct security audits, analyze security incidents, and provide recommendations for improvements. Your strong knowledge of security protocols, network infrastructure, and vulnerability assessment will contribute to maintaining a secure IT environment.

Roles & Responsibilities:
- Apply patches, perform OS upgrades, and manage platform end-of-life
- Perform annual audits and periodic compliance reviews
- Support GxP validation and documentation processes
- Monitor and respond to security incidents
- Correlate alerts across platforms for threat detection (a sketch follows this listing)
- Improve procedures through post-incident analysis

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), Snowflake, workflow orchestration, and performance tuning on big data processing
- Solid understanding of security technologies and their core functionality
- Experience in analyzing cybersecurity threats, with up-to-date knowledge of attack vectors and the cyber threat landscape
- Ability to prioritize tasks effectively and solve problems efficiently in a diverse, global team environment
- Good knowledge of Windows and/or Linux systems
- Experience with security alert correlation across different platforms
- Experience with ServiceNow, especially CMDB, Common Service Data Model (CSDM), and IT Service Management
- SQL & database knowledge: experience working with relational databases, querying data, and optimizing datasets

Preferred Qualifications:
- Familiarity with cloud services like AWS (e.g., Redshift, S3, EC2, IAM) and Databricks (Delta Lake, Unity Catalog, tokens, etc.)
- Understanding of Agile methodologies (Scrum, SAFe)
- Knowledge of DevOps and CI/CD practices
- Familiarity with scientific or healthcare data domains

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being.
From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
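As a concrete picture of the cross-platform alert correlation this role mentions, here is a minimal PySpark sketch: joining endpoint and network alert feeds on host and keeping pairs that fire within a ten-minute window. The schemas, paths, and field names are hypothetical placeholders.

```python
# A minimal sketch, assuming two JSON alert feeds with host and epoch-like
# timestamp fields. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("alert-correlation").getOrCreate()

endpoint = spark.read.json("/security/endpoint_alerts/")   # host, ts, rule
network = spark.read.json("/security/network_alerts/")     # host, ts, signature

correlated = (endpoint.alias("e")
    .join(network.alias("n"), on="host")
    .where(
        F.abs(F.col("e.ts").cast("long") - F.col("n.ts").cast("long")) <= 600
    )
    .select(
        "host",
        F.col("e.rule"),
        F.col("n.signature"),
        F.col("e.ts").alias("endpoint_ts"),
        F.col("n.ts").alias("network_ts"),
    ))

# Hosts with correlated activity across both platforms surface first.
correlated.groupBy("host").count().orderBy(F.desc("count")).show()
```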

Posted 1 week ago

Apply

5.0 - 9.0 years

5 - 7 Lacs

No locations specified

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
The Sr Associate Software Engineer is responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, data engineers, and other engineers to create high-quality, scalable software solutions, automating operations, monitoring system health, and responding to incidents to minimize downtime.

Roles & Responsibilities:
- Possesses strong rapid-prototyping skills and can quickly translate concepts into working code
- Contribute to both front-end and back-end development using cloud technology
- Develop innovative solutions using generative AI technologies
- Ensure code quality and adherence to best practices
- Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations
- Identify and resolve technical challenges effectively
- Stay updated with the latest trends and advancements
- Work closely with the product team, business team, and other stakeholders
- Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements
- Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications
- Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software
- Identify and resolve software bugs and performance issues
- Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time
- Customize modules to meet specific business requirements
- Work on integrating with other systems and platforms to ensure seamless data flow and functionality
- Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Proficiency in Python/PySpark development, Flask/FastAPI, C#, ASP.NET, PostgreSQL, Oracle, Databricks, DevOps tools, CI/CD, and data ingestion; candidates should be able to write clean, efficient, and maintainable code (a small API sketch follows this listing)
- Knowledge of HTML, CSS, and JavaScript, along with popular front-end frameworks like React or Angular, is required to build interactive and responsive web applications
- In-depth knowledge of data engineering concepts, ETL processes, and data architecture principles
- Strong understanding of cloud computing principles, particularly within the AWS ecosystem
- Strong understanding of software development methodologies, including Agile and Scrum
- Experience with version control systems like Git
- Hands-on experience with various cloud services and an understanding of their trade-offs under well-architected cloud design principles
- Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills
- Experience with API integration, serverless, and microservices architecture
- Experience with SQL/NoSQL databases and vector databases for large language models

Preferred Qualifications:
- Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes)
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk)
- Experience with data processing tools like Spark, or similar
- Experience with SAP integration technologies

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
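To make the Flask/FastAPI skill above concrete, here is a minimal FastAPI sketch: one endpoint that validates input with Pydantic and returns JSON. The route and fields are hypothetical placeholders, not Amgen's actual API.

```python
# A minimal sketch, assuming FastAPI and Pydantic are installed.
# Route and field names are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class IngestRequest(BaseModel):
    dataset: str
    rows: int

@app.post("/ingest")
def ingest(req: IngestRequest) -> dict:
    # In a real service this would enqueue an ingestion job.
    return {"dataset": req.dataset, "accepted_rows": req.rows, "status": "queued"}

# Run locally with: uvicorn app:app --reload
```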

Posted 1 week ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease) we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL); proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms (a tuning sketch follows this listing)

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Strong knowledge of Oracle/SQL Server, stored procedures, and PL/SQL; knowledge of the Linux OS
- Knowledge of data visualization and analytics tools like Spotfire and Power BI
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- Databricks Certificate (preferred)
- AWS Data Engineer/Architect

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
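Here is a minimal sketch of the query-optimization skill this listing asks for: broadcasting a small dimension table to avoid a shuffle join, with a partition filter that lets Spark skip irrelevant data. Table names, partition columns, and paths are hypothetical placeholders.

```python
# A minimal sketch, assuming a fact table partitioned by sale_date and a
# small dimension table. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("query-tuning").getOrCreate()

facts = spark.read.parquet("/warehouse/sales")    # partitioned by sale_date
dims = spark.read.parquet("/warehouse/stores")    # small lookup table

result = (facts
    .filter(F.col("sale_date") == "2024-06-01")   # enables partition pruning
    .join(broadcast(dims), "store_id")            # broadcast avoids a shuffle
    .groupBy("region")
    .agg(F.sum("amount").alias("revenue")))

result.explain()   # inspect the plan to confirm a BroadcastHashJoin
```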

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai

On-site

JOB DESCRIPTION

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Data Platform Engineering Lead at JPMorgan Chase within Asset and Wealth Management, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities
- Lead the design, development, and implementation of scalable data pipelines and ETL batches using Python/PySpark on AWS
- Execute standard software solutions, design, development, and technical troubleshooting
- Use infrastructure as code to build applications that orchestrate and monitor data pipelines, create and manage on-demand compute resources on the cloud programmatically, and create frameworks to ingest and distribute data at scale (a sketch follows this listing)
- Manage and mentor a team of data engineers, providing guidance and support to ensure successful product delivery and support
- Collaborate proactively with stakeholders, users, and technology teams to understand business/technical requirements and translate them into technical solutions
- Optimize and maintain data infrastructure on the cloud platform, ensuring scalability, reliability, and performance
- Implement data governance and best practices to ensure data quality and compliance with organizational standards
- Monitor and troubleshoot applications and data pipelines, identifying and resolving issues in a timely manner
- Stay up to date with emerging technologies and industry trends to drive innovation and continuous improvement
- Add to a team culture of diversity, equity, inclusion, and respect

Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience
- Experience in software development and data engineering, with demonstrable hands-on experience in Python and PySpark
- Proven experience with cloud platforms such as AWS, Azure, or Google Cloud
- Good understanding of data modeling, data architecture, ETL processes, and data warehousing concepts
- Experience with, or good knowledge of, cloud-native ETL platforms like Snowflake and/or Databricks
- Experience with big data technologies and services like AWS EMR, Redshift, Lambda, and S3
- Proven experience with efficient cloud DevOps practices and CI/CD tools like Jenkins/GitLab for data engineering platforms
- Good knowledge of SQL and NoSQL databases, including performance tuning and optimization
- Experience with declarative infrastructure provisioning tools like Terraform, Ansible, or CloudFormation
- Strong analytical skills to troubleshoot issues and optimize data processes, working independently and collaboratively
- Experience in leading and managing a team/pod of engineers, with a proven track record of successful project delivery

Preferred qualifications, capabilities, and skills
- Knowledge of the machine learning model lifecycle, language models, and cloud-native MLOps pipelines and frameworks is a plus
- Familiarity with data visualization tools and data integration patterns
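As a concrete picture of the "create and manage on-demand compute programmatically" responsibility above, here is a minimal boto3 sketch that launches a transient EMR cluster, runs one PySpark step, and terminates itself. All names, versions, roles, and paths are hypothetical placeholders, not JPMorgan configuration.

```python
# A minimal sketch, assuming boto3 credentials and default EMR roles exist.
# Every name below is a hypothetical placeholder.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="transient-etl",                       # hypothetical cluster name
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,   # terminate after the step
    },
    Steps=[{
        "Name": "pyspark-etl",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/etl.py"],  # hypothetical
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster:", response["JobFlowId"])
```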

Posted 1 week ago

Apply

7.0 years

6 - 10 Lacs

Bengaluru

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY - Strategy and Transactions (SaT) – DnA Associate Manager

EY's Data n' Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Management, Visualization, Business Analytics and Automation. The assignments cover a wide range of countries and industry sectors.

The opportunity
We're looking for an Associate Manager - Data Engineering. The main objective of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for the clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects, and is also on-shore facing. This role will be instrumental in designing, developing, and evolving modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will come up with design specifications and documentation, develop data migration mappings and transformations for a modern Data Warehouse setup/data mart creation, and define robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting.

Your key responsibilities
- Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management and warehouse administration, for both on-prem and cloud-based engagements
- Strong working knowledge across the technology stack, including ETL, ELT, data analysis, metadata, data quality, audit and design
- Design, develop, and test in an ETL tool environment (GUI/canvas-driven tools to create workflows)
- Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
- Provide technical leadership to a team of data warehouse and business intelligence developers
- Coordinate with other technology users to design and implement matters of data governance, data harvesting, cloud implementation strategy, privacy, and security
- Adhere to ETL/Data Warehouse development best practices
- Responsible for data orchestration, ingestion, ETL and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP)
- Assisting the team with performance tuning for ETL and database processes

Skills and attributes for success
- Minimum of 7 years of total experience, with 3+ years in the Data Warehousing/Business Intelligence field
- Solid hands-on 3+ years of professional experience with the creation and implementation of data warehouses on client engagements, and helping create enhancements to a data warehouse
- Strong knowledge of data architecture for staging and reporting schemas, data models and cutover strategies, using industry-standard tools and technologies
- Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP)
- Minimum 3+ years of experience in Azure database offerings (relational, NoSQL, data warehouse)
- 2+ years of hands-on experience in various Azure services preferred: Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services & Databricks
- Minimum of 3 years of hands-on database design, modeling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse
- Strong in PySpark and SparkSQL (see the sketch after this listing)
- Knowledge of and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS cubes, etc.)
- Strong creative instincts related to data analysis and visualization; aggressive curiosity to learn the business methodology, data model and user personas
- Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends
- Experience with the software development lifecycle (SDLC) and principles of product development, such as installation, upgrade and namespace management
- Willingness to mentor team members
- Solid analytical, technical and problem-solving skills
- Excellent written and verbal communication skills

To qualify for the role, you must have
- Bachelor's or equivalent degree in computer science or a related field, required; advanced degree or equivalent business experience preferred
- A fact-driven and analytical mindset with excellent attention to detail
- Hands-on experience with data engineering tasks, such as building analytical data records, and experience manipulating and analyzing large volumes of data
- Relevant work experience of a minimum of 6 to 8 years in a Big 4 or technology/consulting setup

Ideally, you'll also have
- Ability to think strategically/end-to-end with a result-oriented mindset
- Ability to build rapport within the firm and win the trust of clients
- Willingness to travel extensively and to work on client sites/practice office locations
- Experience in Snowflake

What we look for
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. An opportunity to be a part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
Opportunities to work with EY SaT practices globally, with leading businesses across a range of industries.

What working at EY offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience, to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
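For the PySpark/SparkSQL warehousing work this listing references, here is a minimal sketch: loading a staging table and building a reporting aggregate with SparkSQL. The schema, table names, and paths are hypothetical placeholders.

```python
# A minimal sketch of a staging-to-reporting step, assuming a Parquet
# staging area. All names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("staging-to-reporting").getOrCreate()

staging = spark.read.parquet("/staging/transactions")   # hypothetical staging area
staging.createOrReplaceTempView("stg_transactions")

report = spark.sql("""
    SELECT customer_id,
           date_trunc('month', txn_ts)  AS txn_month,
           SUM(amount)                  AS monthly_spend,
           COUNT(*)                     AS txn_count
    FROM stg_transactions
    WHERE status = 'SETTLED'
    GROUP BY customer_id, date_trunc('month', txn_ts)
""")

report.write.mode("overwrite").parquet("/reporting/monthly_spend")
```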

Posted 1 week ago

Apply

6.0 years

6 - 10 Lacs

Bengaluru

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY - Strategy and Transactions (SaT) – DnA Senior Analyst

EY's Data n' Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Management, Visualization, Business Analytics and Automation. The assignments cover a wide range of countries and industry sectors.

The opportunity
We're looking for a Senior Analyst - Data Engineering. The main objective of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for the clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects, and is also on-shore facing. This role will be instrumental in designing, developing, and evolving modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will come up with design specifications and documentation, develop data migration mappings and transformations for a modern Data Warehouse setup/data mart creation, and define robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting.

Your key responsibilities
- Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management and warehouse administration, for both on-prem and cloud-based engagements
- Strong working knowledge across the technology stack, including ETL, ELT, data analysis, metadata, data quality, audit and design
- Design, develop, and test in an ETL tool environment (GUI/canvas-driven tools to create workflows)
- Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
- Provide technical leadership to a team of data warehouse and business intelligence developers
- Coordinate with other technology users to design and implement matters of data governance, data harvesting, cloud implementation strategy, privacy, and security
- Adhere to ETL/Data Warehouse development best practices
- Responsible for data orchestration, ingestion, ETL and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP)
- Assisting the team with performance tuning for ETL and database processes

Skills and attributes for success
- Minimum of 6 years of total experience, with 3+ years in the Data Warehousing/Business Intelligence field
- Solid hands-on 3+ years of professional experience with the creation and implementation of data warehouses on client engagements, and helping create enhancements to a data warehouse
- Strong knowledge of data architecture for staging and reporting schemas, data models and cutover strategies, using industry-standard tools and technologies
- Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP)
- Minimum 3+ years of experience in Azure database offerings (relational, NoSQL, data warehouse)
- 2+ years of hands-on experience in various Azure services preferred: Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services & Databricks
- Minimum of 3 years of hands-on database design, modeling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse
- Strong in PySpark and SparkSQL (a data-quality sketch follows this listing)
- Knowledge of and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS cubes, etc.)
- Strong creative instincts related to data analysis and visualization; aggressive curiosity to learn the business methodology, data model and user personas
- Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends
- Experience with the software development lifecycle (SDLC) and principles of product development, such as installation, upgrade and namespace management
- Willingness to mentor team members
- Solid analytical, technical and problem-solving skills
- Excellent written and verbal communication skills

To qualify for the role, you must have
- Bachelor's or equivalent degree in computer science or a related field, required; advanced degree or equivalent business experience preferred
- A fact-driven and analytical mindset with excellent attention to detail
- Hands-on experience with data engineering tasks, such as building analytical data records, and experience manipulating and analyzing large volumes of data
- Relevant work experience of a minimum of 6 to 8 years in a Big 4 or technology/consulting setup

Ideally, you'll also have
- Ability to think strategically/end-to-end with a result-oriented mindset
- Ability to build rapport within the firm and win the trust of clients
- Willingness to travel extensively and to work on client sites/practice office locations
- Experience in Snowflake

What we look for
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. An opportunity to be a part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
Opportunities to work with EY SaT practices globally, with leading businesses across a range of industries.

What working at EY offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience, to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
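Complementing the sketch after the previous EY listing, here is a minimal PySpark sketch of the data-quality and audit checks this role calls out: row-count reconciliation and null-rate profiling on a loaded table. The thresholds, counts, and table names are hypothetical placeholders.

```python
# A minimal sketch, assuming a Parquet warehouse table and a known source
# row count. All names and thresholds are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-audit").getOrCreate()

source_count = 1_000_000                       # hypothetical count from the source system
loaded = spark.read.parquet("/warehouse/customers")

# Reconcile row counts between source and target.
loaded_count = loaded.count()
assert abs(loaded_count - source_count) <= 0.01 * source_count, "row-count drift > 1%"

# Profile null rates per column and flag anything above 5%.
null_rates = loaded.select([
    (F.sum(F.col(c).isNull().cast("int")) / F.count(F.lit(1))).alias(c)
    for c in loaded.columns
])
for col, rate in null_rates.first().asDict().items():
    if rate > 0.05:
        print(f"WARN: column {col} is {rate:.1%} null")
```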

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description

Job Title: Senior Data Developer – Azure ADF and Databricks
Experience Range: 8-12 Years
Location: Chennai, Hybrid
Employment Type: Full-Time

About UPS
UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation.

About UPS Supply Chain Symphony™
The UPS Supply Chain Symphony™ platform is a cloud-based solution that seamlessly integrates key supply chain components, including shipping, warehousing, and inventory management, into a unified platform. This solution empowers businesses by offering enhanced visibility, advanced analytics, and customizable dashboards to streamline global supply chain operations and decision-making.

About The Role
We are seeking an experienced Senior Data Developer to join our data engineering team, responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks, and Cosmos DB. The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns.

Key Responsibilities
Data Solution Design and Development:
- Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF)
- Implement data transformations and processing using Azure Databricks
- Develop and maintain NoSQL data models and queries in Cosmos DB (a connector sketch follows this listing)
- Optimize data pipelines for performance, scalability, and cost efficiency

Data Integration and Architecture:
- Integrate structured and unstructured data from diverse data sources
- Collaborate with data architects to design end-to-end data flows and system integrations
- Implement data security, governance, and compliance standards

Performance Tuning and Optimization:
- Monitor and tune data pipelines and processing jobs for performance and cost efficiency
- Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB
Collaboration and Mentoring: Collaborate with cross-functional teams including data testers, architects, and business analysts. Conduct code reviews and provide constructive feedback to improve code quality. Mentor junior developers, fostering best practices in data engineering and cloud development. Primary Skills Data Engineering: Azure Data Factory (ADF), Azure Databricks. Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB). Data Modeling: NoSQL data modeling, Data warehousing concepts. Performance Optimization: Data pipeline performance tuning and cost optimization. Programming Languages: Python, SQL, PySpark. Secondary Skills DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation. Security and Compliance: Implementing data security and governance standards. Agile Methodologies: Experience in Agile/Scrum environments. Leadership and Mentoring: Strong communication and coaching skills for team collaboration. Soft Skills Strong problem-solving abilities and attention to detail. Excellent communication skills, both verbal and written. Effective time management and organizational capabilities. Ability to work independently and within a collaborative team environment. Strong interpersonal skills to engage with cross-functional teams. Educational Qualifications Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. Relevant certifications in Azure and Data Engineering, such as: Microsoft Certified: Azure Data Engineer Associate; Microsoft Certified: Azure Solutions Architect Expert; Databricks Certified Data Engineer Associate or Professional. About The Team As a Senior Data Developer, you will be working with a dynamic, cross-functional team that includes developers, product managers, and other quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications. Contract Type: Permanent. At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
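For candidates weighing the Databricks portion of this role, here is a minimal, illustrative PySpark sketch of the kind of transformation an ADF-orchestrated Databricks job might perform: reading raw shipment events from Azure Data Lake Storage, cleansing them, and writing a curated Delta table. All paths, container names, and column names are hypothetical assumptions, not details from the posting.

```python
# Illustrative Databricks job step: ADLS -> cleanse -> curated Delta table.
# Paths and column names are hypothetical; adapt to your environment.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("shipments-curation").getOrCreate()

raw = (
    spark.read.format("json")
    .load("abfss://raw@examplelake.dfs.core.windows.net/shipments/")  # hypothetical container/path
)

curated = (
    raw.dropDuplicates(["shipment_id"])                  # remove duplicate events
       .filter(F.col("shipment_id").isNotNull())         # basic quality gate on the business key
       .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

(
    curated.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/shipments/")  # hypothetical sink
)
```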

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description Job Title: Senior Data Developer – Azure ADF and Databricks Experience Range: 8-12 Years Location: Chennai, Hybrid Employment Type: Full-Time About UPS UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation. About UPS Supply Chain Symphony™ The UPS Supply Chain Symphony™ platform is a cloud-based solution that seamlessly integrates key supply chain components, including shipping, warehousing, and inventory management, into a unified platform. This solution empowers businesses by offering enhanced visibility, advanced analytics, and customizable dashboards to streamline global supply chain operations and decision-making. About The Role We are seeking an experienced Senior Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks, and Cosmos DB. The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns. Key Responsibilities Data Solution Design and Development: Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF). Implement data transformations and processing using Azure Databricks. Develop and maintain NoSQL data models and queries in Cosmos DB. Optimize data pipelines for performance, scalability, and cost efficiency. Data Integration and Architecture: Integrate structured and unstructured data from diverse data sources. Collaborate with data architects to design end-to-end data flows and system integrations. Implement data security, governance, and compliance standards. Performance Tuning and Optimization: Monitor and tune data pipelines and processing jobs for performance and cost efficiency. Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB. Collaboration and Mentoring: Collaborate with cross-functional teams including data testers, architects, and business analysts. Conduct code reviews and provide constructive feedback to improve code quality.
Mentor junior developers, fostering best practices in data engineering and cloud development. Primary Skills Data Engineering: Azure Data Factory (ADF), Azure Databricks. Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB). Data Modeling: NoSQL data modeling, Data warehousing concepts. Performance Optimization: Data pipeline performance tuning and cost optimization. Programming Languages: Python, SQL, PySpark. Secondary Skills DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation. Security and Compliance: Implementing data security and governance standards. Agile Methodologies: Experience in Agile/Scrum environments. Leadership and Mentoring: Strong communication and coaching skills for team collaboration. Soft Skills Strong problem-solving abilities and attention to detail. Excellent communication skills, both verbal and written. Effective time management and organizational capabilities. Ability to work independently and within a collaborative team environment. Strong interpersonal skills to engage with cross-functional teams. Educational Qualifications Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. Relevant certifications in Azure and Data Engineering, such as: Microsoft Certified: Azure Data Engineer Associate; Microsoft Certified: Azure Solutions Architect Expert; Databricks Certified Data Engineer Associate or Professional. About The Team As a Senior Data Developer, you will be working with a dynamic, cross-functional team that includes developers, product managers, and other quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications. Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Engineer (Microsoft Fabric & Lakehouse) Location: Hybrid – Bangalore, India Experience: 5+ Years Joining: Immediate Key Responsibilities ● Design and build robust data pipelines using Microsoft Fabric components including Pipelines, Notebooks (PySpark), Dataflows, and Lakehouse architecture. ● Ingest and transform data from a variety of sources such as cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (e.g., Salesforce, Workday), and REST/OpenAPI-based APIs. ● Develop and maintain semantic models and define standardized KPIs for reporting and analytics in Power BI or equivalent BI tools. ● Implement and manage Delta Tables across bronze/silver/gold layers using Lakehouse medallion architecture within OneLake or equivalent environments. ● Apply metadata-driven design principles to support pipeline parameterization, reusability, and scalability. ● Monitor, debug, and optimize pipeline performance; implement logging, alerting, and observability mechanisms. ● Establish and enforce data governance policies including schema versioning, data lineage tracking, role-based access control (RBAC), and audit trail mechanisms. ● Perform data quality checks including null detection, duplicate handling, schema drift management, outlier identification, and Slowly Changing Dimensions (SCD) type management. Required Skills & Qualifications ● 5+ years of hands-on experience in Data Engineering or related fields. ● Solid understanding of data lake/lakehouse architectures, preferably with Microsoft Fabric or equivalent tools (e.g., Databricks, Snowflake, Azure Synapse). ● Strong experience with PySpark, SQL, and working with dataflows and notebooks. ● Exposure to BI tools like Power BI, Tableau, or equivalent for data consumption layers. ● Experience with Delta Lake or similar transactional storage layers. ● Familiarity with data ingestion from SaaS applications, APIs, and enterprise databases. ● Understanding of data governance, lineage, and RBAC principles. ● Strong analytical, problem-solving, and communication skills.
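To make the medallion pattern this posting describes concrete, here is a minimal PySpark sketch of a bronze-to-silver promotion with the null and duplicate checks the role lists. Table paths and column names are illustrative assumptions, not part of the posting.

```python
# Minimal sketch of a bronze -> silver promotion in a medallion lakehouse.
# Table paths and columns are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("Tables/bronze_orders")  # hypothetical Lakehouse table path

silver = (
    bronze.filter(F.col("order_id").isNotNull())                        # null detection on the business key
          .dropDuplicates(["order_id"])                                 # duplicate handling
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))  # enforce a stable schema
)

silver.write.format("delta").mode("overwrite").save("Tables/silver_orders")
```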

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role Refer to responsibilities You will be responsible for Job Summary: Build solutions for the real-world problems in workforce management for retail. You will work with a team of highly skilled developers and product managers throughout the entire software development life cycle of the products we own. In this role you will be responsible for designing, building, and maintaining our big data pipelines. Your primary focus will be on developing data pipelines using available technologies. In this job, I'm accountable for: Following our Business Code of Conduct, always acting with integrity and due diligence, and having these specific risk responsibilities: -Represent Talent Acquisition in all forums/seminars pertaining to process, compliance and audit -Perform other miscellaneous duties as required by management -Driving CI culture, implementing CI projects and innovation within the team -Design and implement scalable and reliable data processing pipelines using Spark/Scala/Python and the Hadoop ecosystem. -Develop and maintain ETL processes to load data into our big data platform. -Optimize Spark jobs and queries to improve performance and reduce processing time (an illustrative PySpark sketch follows this posting). -Working with product teams to communicate and translate needs into technical requirements. -Design and develop monitoring tools and processes to ensure data quality and availability. -Collaborate with other teams to integrate data processing pipelines into larger systems. -Delivering high quality code and solutions, bringing solutions into production. -Performing code reviews to optimise technical performance of data pipelines. -Continually look for how we can evolve and improve our technology, processes, and practices. -Leading group discussions on system design and architecture. -Manage and coach individuals, providing regular feedback and career development support aligned with business goals. -Allocate and oversee team workload effectively, ensuring timely and high-quality outputs. -Define and streamline team workflows, ensuring consistent adherence to SLAs and data governance practices. -Monitor and report key performance indicators (KPIs) to drive continuous improvement in delivery efficiency and system uptime. -Oversee resource allocation and prioritization, aligning team capacity with project and business demands. Key people and teams I work with in and outside of Tesco: TBS & Tesco Senior Management; TBS Reporting Team; Tesco UK / ROI / Central Europe; Business stakeholders. People, budgets and other resources I am accountable for in my job: Any other accountabilities by the business. Skills relevant for this job: ETL, YARN, Spark, Hive, Hadoop, PySpark/Python. Experience relevant for this job: 7+ years of experience in building and maintaining big data and query platforms using Spark/Scala in Linux/Unix/Shell environments, including query optimisation. Strong knowledge of distributed computing principles and big data technologies such as Hadoop, Spark, Streaming, etc. Experience with ETL processes and data modelling. Problem-solving and troubleshooting skills. Working knowledge of Oozie/Airflow. Experience in writing unit test cases and shell scripting. Ability to work independently and as part of a team in a fast-paced environment. Good to have: Kafka, REST API/reporting tools. You will need Refer to responsibilities What's in it for you? At Tesco, we are committed to providing the best for you.
As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy. Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. About Us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business.
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
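Purely as an illustration of the Spark optimisation responsibilities listed in the posting above, here is a minimal PySpark sketch of a Hive-backed batch job with two routine tuning choices. The database and table names are hypothetical assumptions.

```python
# Illustrative PySpark batch job against Hive tables, with two common tuning knobs.
# Database/table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("workforce-rollup")
    .config("spark.sql.shuffle.partitions", "200")  # size shuffles to the cluster, not the default
    .enableHiveSupport()
    .getOrCreate()
)

shifts = spark.table("workforce.shifts")            # hypothetical fact table
stores = spark.table("workforce.stores")            # hypothetical small dimension

rollup = (
    shifts.join(F.broadcast(stores), "store_id")    # broadcast the small side to avoid a shuffle
          .groupBy("region", "week")
          .agg(F.sum("hours_worked").alias("total_hours"))
)

rollup.write.mode("overwrite").saveAsTable("workforce.weekly_hours")
```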

Posted 1 week ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About The Role As part of the AI & Data organization, the Enterprise Business Intelligence (EBI) team is central to NXP’s data analytics success. We provide and maintain scalable data solutions, platforms, and methodologies that empower business users to create self-service analytics and drive data-informed decisions. We are seeking a Data Engineering Manager to lead a team of skilled Data Engineers. In this role, you will be responsible for overseeing the design, development, and maintenance of robust data pipelines and data models across multiple data platforms, including Databricks, Teradata, Postgres and others. You will collaborate closely with Product Owners, Architects, Data Scientists, and cross-functional stakeholders to ensure high-quality, secure, and scalable data solutions. Key Responsibilities Lead, mentor, and grow a team of Data Engineers, fostering a culture of innovation, collaboration, and continuous improvement. Oversee the design, development, and optimization of ETL/ELT pipelines and data workflows across multiple cloud and on-premise environments. Ensure data solutions align with enterprise architecture standards, including performance, scalability, security, privacy, and compliance. Collaborate with stakeholders to translate business requirements into technical specifications and data models. Drive adoption of best practices in data engineering, including code quality, testing, version control, and CI/CD. Partner with the Operational Support team to troubleshoot and resolve data issues and incidents. Stay current with emerging technologies and trends in data engineering and analytics. Required Skills & Qualifications Proven experience as a Data Engineer with 12+ years in ETL/ELT design and development. 5+ years of experience in a technical leadership or management role, with a track record of building and leading high-performing teams. Strong hands-on experience with cloud platforms (AWS, Azure) and their data services (e.g., S3, Redshift, Glue, Azure Data Factory, Synapse). Proficiency in SQL, Python, and PySpark for data transformation and processing. Experience with data orchestration tools and CI/CD pipelines (GitHub Actions, GitLab CI). Familiarity with data modeling, data warehousing, and data lake architectures. Understanding of data governance, security, and compliance frameworks (e.g., GDPR, HIPAA). Excellent communication and stakeholder management skills. Preferred Skills & Qualifications Experience with Agile methodologies and DevOps practices. Proficiency with Databricks, Teradata, Postgres, Fivetran HVR, and dbt. Knowledge of AI/ML workflows and integration with data pipelines. Experience with monitoring and observability tools. Familiarity with data cataloging and metadata management tools (e.g., Alation, Collibra). More information about NXP in India...
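In the spirit of the testing and CI/CD best practices this role emphasizes, here is a hedged sketch of a unit-testable PySpark transformation with a pytest-style test. The function, column names, and data are illustrative assumptions.

```python
# Sketch of a unit-testable transformation, reflecting the code-quality and
# CI/CD practices the role describes. Names and columns are assumptions.
from pyspark.sql import DataFrame, SparkSession, functions as F
from pyspark.sql.window import Window

def deduplicate_latest(df: DataFrame) -> DataFrame:
    """Keep the most recent record per key, a common ELT building block."""
    w = Window.partitionBy("id").orderBy(F.col("updated_at").desc())
    return df.withColumn("_rn", F.row_number().over(w)).filter("_rn = 1").drop("_rn")

def test_deduplicate_latest():
    # Runs locally under pytest with a small in-memory DataFrame.
    spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
    df = spark.createDataFrame(
        [(1, "2024-01-01"), (1, "2024-02-01"), (2, "2024-01-15")],
        ["id", "updated_at"],
    )
    out = deduplicate_latest(df)
    assert out.count() == 2
    assert out.filter("id = 1").first()["updated_at"] == "2024-02-01"
```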

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description R3 - Senior Manager, Frontend Data Stewardship Engineer The Opportunity Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers. Role Overview As a Frontend Data Stewardship Engineer, you will play a pivotal role in ensuring the robustness and quality of our data across all domains, which will directly influence patients who use our life-saving products. Key tasks include metadata management in Collibra, workflow design and implementation, and applying data governance principles in our Collibra platform. If you are passionate about data governance and want to make a significant impact, we encourage you to apply. Role Description As part of the enterprise Data Catalog platform team, you will contribute to our success in the following areas: Work with our divisional partners to onboard their data to our data catalog and help drive adoption within their teams. Understand divisional requirements and codify them within the data catalog. Contribute to the development and documentation of standards for platform usage. Perform product engineering and develop automation utilities. Educate users on the platform, promoting consistent use. Who You Are Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience. Hands-on professional who has been in the technology industry for a minimum of 7-11 years as a Data Engineer. Experience with Collibra Data Catalog is a must-have requirement, demonstrated by a Collibra Ranger certificate. Deep expertise in the Collibra data catalog platform, including configuration, settings maintenance and automation, and Collibra dashboard design and maintenance. Experienced in workflow design, development, and maintenance using Collibra Workflow Designer. Hands-on experience in implementing and configuring Collibra Data Governance, including developing metadata ingestion, metamodels, and workflows. In-depth knowledge of data governance principles, data stewardship processes, data quality concepts, and best practices.
Strong experience in configuring and connecting to various data sources for metadata, data lineage, data profiling and data quality. Attention to detail and ability to produce high-quality technical documentation. Strong level of SQL is a must. Knowledge of data transformation (ETL/ELT) routines. Strong understanding of REST APIs and how to use them programmatically. Knowledge of GitHub and Python is an advantage. Familiar with DevOps and CI/CD. Some experience with Spark/PySpark would be good. Good standard of professional communication and building working relationships with customers. Good time-management skills and ability to work independently. Innovative mindset, willingness to learn new areas and adapt to change. Strong work documentation habits, with attention to detail and accuracy. Team player spirit. Who We Are We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What We Look For Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYTIT2025 Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status: Regular Flexible Work Arrangements: Hybrid Required Skills: Data Management, Data Modeling, Quality Management Job Posting End Date: 08/04/2025 A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID R345618
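Since the posting asks for programmatic use of REST APIs, here is a minimal Python sketch of querying a data catalog's REST API. The host is a placeholder, and while the "/rest/2.0/assets" path follows Collibra's documented API style, treat the endpoint, parameters, and response shape as assumptions to verify against your instance's API documentation.

```python
# Hedged sketch: calling a data catalog's REST API programmatically.
# Host, credentials, endpoint path, and response fields are assumptions;
# confirm against your instance's API docs before relying on them.
import requests

BASE_URL = "https://your-instance.collibra.com/rest/2.0"  # hypothetical host
session = requests.Session()
session.auth = ("svc_catalog", "REDACTED")                # use a vaulted credential in practice

resp = session.get(f"{BASE_URL}/assets", params={"name": "customer", "limit": 10})
resp.raise_for_status()

for asset in resp.json().get("results", []):              # assumed response envelope
    print(asset.get("id"), asset.get("name"))
```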

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

We are looking for a Lead Data Engineer with at least 7 years of experience who is proficient in Python, PySpark, Airflow (Batch Jobs), HPCC, and ECL. Your role will involve driving complex data solutions across various teams. It is essential that you have practical knowledge of data modeling, test-driven development, and familiarity with Agile/Waterfall methodologies. Your responsibilities will include leading projects, working collaboratively with different teams, and transforming business requirements into scalable data solutions following industry best practices in managed services or staff augmentation environments.
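For the Airflow batch-job side of this role, here is a hedged sketch of a DAG submitting a PySpark job. The DAG name, schedule, connection ID, and script path are illustrative assumptions; the `schedule` argument assumes Airflow 2.4 or later.

```python
# Hedged sketch of an Airflow DAG submitting a nightly PySpark batch job.
# DAG id, schedule, connection id, and application path are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="nightly_customer_batch",                     # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",                                # nightly at 02:00 (Airflow 2.4+ syntax)
    catchup=False,
) as dag:
    run_spark_job = SparkSubmitOperator(
        task_id="transform_customers",
        application="/opt/jobs/transform_customers.py",  # hypothetical PySpark script
        conn_id="spark_default",
        conf={"spark.sql.shuffle.partitions": "200"},
    )
```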

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra

On-site

Senior AI Solutions - Vice President - Data & Analytics Engineering Mumbai, Maharashtra, India Job description Employment Type Full time Job Level Vice President Posted Date Jul 27, 2025 Morgan Stanley Senior AI Solutions - Vice President - Data & Analytics Engineering Profile Description We’re seeking someone to join our team as Vice President - Senior AI Solutions. The Firmwide Data Office department is recruiting for an enthusiastic, dynamic, hands-on and delivery-focused AI Solutions Engineer with a strong background in working with Generative AI (GenAI), Large Language Models (LLMs), traditional AI, and Natural Language Processing (NLP) techniques. The ideal candidate, in addition to experience in data science, will possess expertise in designing, architecting, and optimising data-intensive systems, with a keen focus on big data analytics. This role offers an exciting opportunity to work on cutting-edge projects leveraging LLMs with large volumes of structured and unstructured data, as well as building and integrating Knowledge Graphs, LLMs, and multiagent systems. CDRR_Technology The Cybersecurity organization's mission is to create an agile, adaptable organization with the skills and expertise needed to defend against increasingly sophisticated adversaries. This will be achieved by maintaining sound capabilities to identify and protect our assets, proactively assessing threats and vulnerabilities and detecting events, ensuring resiliency through our ability to respond to and recover from incidents, and building awareness and increasing vigilance while continually developing our cyber workforce. Firmwide Data Office The Data COE team is distributed globally between New York, London, Budapest, India, and Shanghai, and is engaged in a wide array of projects touching all business units (Institutional Securities, Investment Management, Wealth Management) and functions (e.g., Operations, Finance, Risk, Trading, Treasury, Resilience) across the Firm. The team vision is a multi-year effort to simplify the firm’s data architecture and business processes front-to-back, with goals of reducing infrastructure and manpower costs, improving the ability to demonstrate control of data, empowering developers by providing consistent means of handling data, facilitating data-driven insights and decision making, and providing a platform to implement future change initiatives faster, cheaper, and easier. Data & Analytics Engineering This is a Vice President position that provides specialist data analysis and expertise that drive decision-making and business insights, as well as crafting data pipelines, implementing data models, and optimizing data processes for improved data accuracy and accessibility, including applying machine learning and AI-based techniques. Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. At Morgan Stanley India, we support the Firm’s global businesses, with critical presence across Institutional Securities, Wealth Management, and Investment Management, as well as in the Firm’s infrastructure functions of Technology, Operations, Finance, Risk Management, Legal and Corporate & Enterprise Services. Morgan Stanley has been rooted in India since 1993, with campuses in both Mumbai and Bengaluru.
We empower our multi-faceted and talented teams to advance their careers and make a global impact on the business. For those who show passion and grit in their work, there’s ample opportunity to move across the businesses. Interested in joining a team that’s eager to create, innovate and make an impact on the world? Read on… What you’ll do in the role: As a member of our team, we look first and foremost for people who are passionate about solving business problems through innovation and engineering practices. You'll be required to apply your depth of knowledge and expertise to all aspects of the software development lifecycle, as well as partner with stakeholders to stay focused on business goals. We embrace a culture of experimentation and constantly strive for improvement and learning. You'll work in a collaborative, trusting, thought-provoking environment, one that encourages diversity of thought and creative solutions that are in the best interests of our customers globally. You'll combine your design and development expertise with a never-ending quest to create innovative technology through solid engineering practices. You'll work with a highly inspired and inquisitive team of technologists who are developing & delivering top quality technology products to our clients & stakeholders. Key Responsibilities Design and develop state-of-the-art GenAI and general AI solutions as well as multiagent systems to solve complex business problems. Integrate knowledge graphs, LLMs, and multiagent systems. Leverage NLP techniques to enhance applications in language understanding, generation, and other data-driven tasks. Lead the design and architecture of scalable, efficient, and high-performance data systems that support processing of massive datasets of structured and unstructured data. Use machine learning frameworks and tools to train, fine-tune, and optimise models. Implement best practices for model evaluation, validation, and scalability. Stay up to date with the latest trends in AI, NLP, LLMs and big data technologies. Contribute to the development and implementation of new techniques that improve performance and innovation. Collaborate with cross-functional teams, including engineers, product owners, and other stakeholders to deploy AI models into production systems and deliver value to the business. Leverage a strong problem-solving mindset to identify issues, propose solutions, and conduct research to enhance the efficiency of AI and machine learning algorithms. Communicate complex model results and actionable insights to stakeholders through compelling visualizations and narratives. What you’ll bring to the role: B.E., Master’s, or PhD in Computer Science, Mathematics, Engineering, Statistics, or a related field. Proven experience building and deploying GenAI models to production with demonstrable business value realization. 12+ years of total working experience. At least 8 years' relevant experience would generally be expected to find the skills required for this role. 8+ years' experience in traditional AI methodologies including deep learning, supervised and unsupervised learning, and various NLP techniques (e.g., tokenization, named entity recognition, text classification, sentiment analysis etc.) Strong proficiency in Python with deep experience using frameworks like Pandas, PySpark, TensorFlow, XGBoost. Demonstrated experience dealing with big-data technologies and the ability to process, clean and analyse large-scale datasets.
Experience designing and architecting high-performance, data-intensive systems that are scalable and reliable. Strong communication skills to present technical concepts and results to both technical and non-technical stakeholders. Ability to work in a team-oriented and collaborative environment. Experience with Prompt Engineering, Retrieval Augmented Generation (RAG), Vector Databases Strong understanding of multiagent architectures and experience with frameworks for agent development Knowledge of Semantic Knowledge Graphs and their integration into AI/ML workflows WHAT YOU CAN EXPECT FROM MORGAN STANLEY: We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren’t just beliefs, they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There’s also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents. 
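Given the posting's emphasis on RAG and vector databases, here is a minimal retrieval sketch using sentence-transformers and FAISS. The embedding model, corpus, and scoring setup are illustrative assumptions; a production system would typically use a managed vector store and pass the retrieved context to an LLM.

```python
# Minimal RAG-style retrieval sketch: embed documents, index them, retrieve
# the nearest ones for a query. Model choice and corpus are assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Trade settlement exceptions are escalated to the operations desk.",
    "Client onboarding requires KYC documentation review.",
    "Data lineage is tracked from source systems to reports.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")      # small, commonly used embedder
doc_vecs = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])          # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = model.encode(["who reviews KYC documents?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)
for score, i in zip(scores[0], ids[0]):
    print(round(float(score), 3), docs[i])            # retrieved context would feed an LLM prompt
```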

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

It is exciting to be a part of a company where the employees truly believe in the mission and values of the organization. At Fractal Analytics, we are dedicated to bringing passion and customer focus to our business operations. Our vision is to empower every human decision in the enterprise, creating a world where individual choices, freedom, and diversity are celebrated. We believe in fostering an ecosystem where human imagination plays a vital role in every decision-making process, constantly challenging ourselves to innovate and improve. We value individuals who empower imagination with intelligence, and we call them true Fractalites. We are currently seeking a Data Engineer with 2-5 years of experience to join our team in Bangalore, Gurgaon, Chennai, Coimbatore, Pune, or Mumbai. The ideal candidate will be responsible for ensuring that production-related activities are delivered within the agreed Service Level Agreements (SLAs). This role involves working on issues, bug fixes, minor changes, and collaborating with the development team when necessary to address any challenges and implement enhancements. Key Technical Skills required for this role include: - Strong proficiency in Azure Data Engineering services, specifically Azure Data Factory, Azure Databricks, and Storage (ADLS Gen 2) - Experience in Web app/App service development - Proficiency in programming languages such as Python, PySpark, and SQL - Hands-on experience with log analytics and Application Insights - Strong expertise in Azure SQL In addition to technical skills, the following non-technical skills are mandatory: - Drive incident and problem resolution to support key operational activities - Collaborate on change ticket review, approvals, and planning with internal teams - Support the transition of projects from project teams to support teams - Serve as an escalation point for operation-related issues - Experience with ServiceNow is preferred - Strong attention to detail with a focus on quality and accuracy - Ability to manage multiple tasks with appropriate priority and time management skills - Flexibility in work content and eagerness to learn - Knowledge of service support, operation, and design processes (ITIL) - Strong relationship-building skills to collaborate with stakeholders at all levels and across organizational boundaries If you are someone who thrives in a dynamic environment and enjoys working with motivated individuals who are passionate about growth, then a career with us at Fractal Analytics may be the perfect fit for you. If this role does not align with your experience, feel free to express your interest in future opportunities by connecting with us through the Introduce Yourself feature on our website or by creating an account to receive email alerts for new job postings matching your interests.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

At PwC, our focus in data and analytics is on leveraging data to drive insights and make informed business decisions. We utilize advanced analytics techniques to help clients optimize their operations and achieve strategic goals. In the field of data analysis at PwC, you will be tasked with utilizing advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. Your role will involve leveraging skills in data manipulation, visualization, and statistical modeling to support clients in solving complex business problems. Driven by curiosity, you are expected to be a reliable and contributing member of a team. In our fast-paced environment, you must be adaptable to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is viewed as an opportunity for learning and growth. You are expected to take ownership and consistently deliver quality work that drives value for our clients and contributes to the success of the team. As you progress within the Firm, you will build a brand for yourself, opening doors to more opportunities. Examples of the skills, knowledge, and experiences required to lead and deliver value at this level include but are not limited to: - Applying a learning mindset and taking ownership for your own development. - Appreciating diverse perspectives, needs, and feelings of others. - Adopting habits to sustain high performance and develop your potential. - Actively listening, asking questions to check understanding, and clearly expressing ideas. - Seeking, reflecting, acting on, and giving feedback. - Gathering information from a range of sources to analyze facts and discern patterns. - Committing to understanding how the business works and building commercial awareness. - Learning and applying professional and technical standards, upholding the Firm's code of conduct and independence requirements. Position: Senior Associate Domain: Data Science Conversational AI About Acceleration Center Mumbai/Bangalore: PwC connects people with diverse backgrounds and skill sets to solve important problems together and lead with purpose for clients, communities, and the world at large. Acceleration Centers (ACs) are diverse, global talent hubs focused on enabling growth for the organization and value creation for clients. The PwC Advisory Acceleration Center in Bangalore is part of our Advisory business in the US, focusing on developing a broader portfolio with solutions for Risk Consulting, Management Consulting, Technology Consulting, Strategy Consulting, Forensics, and vertical-specific solutions. PwC's high-performance culture is based on a passion for excellence with a focus on diversity and inclusion. Collaboration and support from a network of people are provided to help achieve goals, along with global leadership development frameworks and the latest digital technologies to facilitate learning and career advancement. The firm's philosophy revolves around caring for its people, making PwC one of the most attractive employers globally according to Universum. Commitment to Responsible Business Leadership, Diversity & Inclusion, work-life flexibility, career coaching, and learning & development contribute to making PwC one of the best places to work, learn, and excel. 
We are seeking an experienced senior associate with a strong analytical and Conversational AI background (minimum 4 years of overall professional experience) to join our Digital Contact Solutions team within the Analytics Consulting practice. Senior Associates will work as integral parts of business analytics teams in India, collaborating with clients and consultants in the U.S., leading teams for high-end analytics consulting engagements, and providing business recommendations to project teams. Education: Advanced Degree in a quantitative discipline such as Computer Science, Engineering, Econometrics, Statistics, or Information Sciences like business analytics or informatics. Required Skills include: - Familiarity with the Conversational AI domain, conversational design & implementation, customer experience metrics, and industry-specific challenges. - Understanding of conversational (chats, emails, and calls) data and its preprocessing for training Conversational AI systems. - Strong problem-solving and analytical skills for troubleshooting and optimizing conversational AI systems. - Familiarity with NLP/NLG techniques such as part-of-speech tagging, lemmatization, canonicalization, Word2vec, sentiment analysis, topic modeling, and text classification. - Expertise in NLP and NLU verticals, including Text to Speech (TTS), Speech to Text (STT), SSML modeling, Intent Analytics, Proactive Outreach Orchestration, OmniChannel AI & IVR, Intelligent Agent Assist, Contact Center as a Service (CCaaS), Modern Data for Conversational AI, and Generative AI. - Experience building chatbots using frameworks like RASA, LUIS, DialogFlow, Lex, etc., and building NLU model pipelines using feature extraction, entity extraction, intent classification, etc. - Understanding and experience with cloud platforms and their services for building Conversational AI solutions for clients. - Expertise in Python, PySpark, R, JavaScript frameworks, and visualization tools like Power BI, Tableau, QlikView, Spotfire. - Experience in evaluating and improving conversational AI system performance through metrics and user feedback. - Excellent communication and collaboration skills to work effectively with cross-functional teams and stakeholders. - Proven track record of successfully delivering conversational AI projects on time. - Familiarity with Agile development methodologies and version control systems. - Ability to stay updated with the latest advancements and trends in conversational AI technologies. - Strong strategic thinking and ability to align conversational AI initiatives with business goals. - Knowledge of regulatory and compliance requirements related to conversational AI applications. - Experience in the telecom industry or a similar field. - Familiarity with customer service operations and CRM systems. Nice To Have Skills: - Familiarity with data wrangling tools such as Alteryx, Excel, and Relational storage. - ML modeling skills: Experience in statistical techniques like Regression, Time Series Forecasting, Classification, XGB, Clustering, Neural Networks, Simulation Modelling, etc. - Experience in survey analytics, organizational functions such as pricing, sales, marketing, operations, customer insights, etc. - Understanding of NoSQL databases for handling unstructured and semi-structured data. Roles and Responsibilities: As a Senior Associate, you will be involved in the end-to-end project cycle, from developing proposals to delivering the final product.
You will support PwC leadership and lead client conversations to build suitable products. Specific responsibilities include: - Developing and executing project & analysis plans under the guidance of the project manager. - Translating business needs into data science solutions and actionable insights. - Handling client interactions regarding business problems. - Driving and conducting analysis using advanced analytics tools, coaching junior team members. - Writing and deploying production-ready code. - Building storylines and making presentations for client teams and PwC project leadership teams. - Contributing to knowledge and firm-building activities. What We Offer: - Policies around Work-from-Home and flexible working hours. - Mid-year appraisal cycle to reward performance on time. - Opportunities to solve impactful problems for clients. - Continuous learning and upskilling opportunities. - Access to Massive Online Open Courses (MOOC) at no cost. - World-class leadership guidance. - Diverse peer group support. - Interaction with senior client leadership, potential client visits, and permanent relocations as needed.
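As a small illustration of the intent-classification work named in this posting's NLU pipeline requirements, here is a hedged scikit-learn sketch. The utterances, intents, and model choice are toy assumptions, not part of the posting.

```python
# Hedged sketch of a simple intent classifier of the kind used in
# conversational AI pipelines. The tiny training set is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I want to check my order status", "where is my package",
    "cancel my subscription", "please stop my plan",
    "talk to a human agent", "connect me to support",
]
intents = ["track_order", "track_order", "cancel", "cancel", "agent", "agent"]

# TF-IDF features with unigrams and bigrams feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(utterances, intents)

print(clf.predict(["when will my parcel arrive"]))  # expected: ['track_order']
```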

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be part of our Data Engineering Team in Gurgaon, Noida, or Pune, contributing to the development and management of large enterprise data and analytics platforms. Your role will involve collaborating with the data engineering and data science teams to implement scalable data lakes, data ingestion platforms, machine learning analytics platforms, and more. With at least 3 years of industry experience, you will be responsible for creating end-to-end data solutions and optimal data processing pipelines for handling large volumes of diverse data types. Proficiency in Python, including knowledge of design patterns and strong design skills, is essential. You should have expertise in working with PySpark Dataframes, Pandas Dataframes, and developing efficient data manipulation tasks. Experience in building Restful web services, API platforms, SQL, NoSQL databases, and stream-processing systems like Spark-Streaming and Kafka will be crucial. You will collaborate with the data science and infrastructure teams to deploy machine learning solutions in production environments. It would be advantageous if you have experience with testing libraries like pytest, knowledge of Docker, Kubernetes, model versioning with MLflow, and microservices libraries. Familiarity with machine learning algorithms and libraries is a plus. Our ideal candidate is proactive, independent, and enjoys problem-solving. Continuous learning is encouraged as we are a rapidly growing company. Being a team player and an effective communicator is essential for success in this role.
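For the Spark-Streaming and Kafka stack this posting names, here is a hedged Structured Streaming sketch. The broker address, topic, schema, and sink paths are assumptions, and the job requires the spark-sql-kafka connector package on the classpath.

```python
# Illustrative Spark Structured Streaming job consuming JSON events from Kafka.
# Broker, topic, schema, and sink paths are assumptions; requires the
# spark-sql-kafka connector package.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")                    # hypothetical sink
    .option("checkpointLocation", "/chk/events")       # required for fault tolerance
    .start()
)
```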

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As an AI/ML Computational Science Specialist at Accenture, you will be part of the Technology for Operations team, serving as a trusted advisor and partner to Accenture Operations. Your role involves providing innovative and secure technologies to assist clients in building an intelligent operating model to drive exceptional results. Working closely with sales, offering, and delivery teams, you will identify and develop cutting-edge solutions in areas such as Application Hosting Operations (AHO), Infrastructure Management (ISMT), and Intelligent Automation. With 7 to 11 years of experience and a background in Any Graduation/Post Graduate Diploma in Management, you will be expected to have a strong grasp of Artificial Intelligence (AI) principles, concepts, techniques, and tools. Proficiency in Python programming, PySpark, Microsoft SQL Server, and Microsoft SQL Server Integration Services (SSIS) is essential for this role. Your responsibilities will include analyzing and solving moderately complex problems, developing new solutions, and aligning with the strategic direction set by senior management. Effective written and verbal communication, teamwork skills, numerical ability, and results orientation are key attributes required for success in this position. In this role, you may interact with peers, management levels, and clients both at Accenture and externally. You may also be required to manage small teams or work efforts. Flexibility to work in rotational shifts is a possibility. Accenture is a global professional services company with expertise in digital, cloud, and security services across various industries. Join our team of over 699,000 professionals worldwide to deliver innovative solutions and drive value for clients, shareholders, and communities. Visit www.accenture.com to learn more about our offerings and impact. If you are passionate about leveraging technology and human ingenuity to create value and drive change, we invite you to explore this exciting opportunity to be a part of our Technology for Operations team as an AI/ML Computational Science Specialist at Accenture.
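To illustrate the PySpark and Microsoft SQL Server pairing this role requires, here is a hedged sketch of reading a SQL Server table into Spark over JDBC. The URL, table, and credentials are placeholders, and the SQL Server JDBC driver must be available on the Spark classpath.

```python
# Hedged sketch: reading a Microsoft SQL Server table into Spark over JDBC.
# URL, table, and credentials are placeholders; the SQL Server JDBC driver
# must be on the Spark classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mssql-ingest").getOrCreate()

df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://dbhost:1433;databaseName=ops")  # hypothetical server/database
    .option("dbtable", "dbo.tickets")                                 # hypothetical table
    .option("user", "etl_user")
    .option("password", "REDACTED")                                   # use a secret store in practice
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load()
)

df.groupBy("status").count().show()  # quick sanity check on the ingested data
```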

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

YipitData is a leading market research and analytics firm specializing in the disruptive economy, having recently secured a significant investment from The Carlyle Group valued at over $1B. Recognized for three consecutive years as one of Inc's Best Workplaces, we are a rapidly expanding technology company with offices across various locations globally, fostering a culture centered on mastery, ownership, and transparency. As a potential candidate, you will have the opportunity to collaborate with strategic engineering leaders and report directly to the Director of Data Engineering. This role involves contributing to the establishment of our Data Engineering team presence in India and working within a global team framework, tackling challenging big data problems. We are currently in search of a highly skilled Senior Data Engineer with 6-8 years of relevant experience to join our dynamic Data Engineering team. The ideal candidate should possess a solid grasp of Spark and SQL, along with experience in data pipeline development. Successful candidates will play a vital role in expanding our data engineering team, focusing on enhancing reliability, efficiency, and performance within our strategic pipelines. The Data Engineering team at YipitData sets the standard for all other analyst teams, maintaining and developing the core pipelines and tools that drive our products. This team plays a crucial role in supporting the rapid growth of our business and presents a unique opportunity for the first hire to potentially lead and shape the team as responsibilities evolve. This hybrid role will be based in India, with training and onboarding requiring overlap with US working hours initially. Subsequently, standard IST working hours are permissible, with occasional meetings with the US team. As a Senior Data Engineer at YipitData, you will work directly under the Senior Manager of Data Engineering, receiving hands-on training on cutting-edge data tools and techniques. Responsibilities include building and maintaining end-to-end data pipelines, establishing best practices for data modeling and pipeline construction, generating documentation and training materials, and proficiently resolving complex data pipeline issues using PySpark and SQL. Collaboration with stakeholders to integrate business logic into central pipelines and mastering tools like Databricks, Spark, and other ETL technologies is also a key aspect of the role. Successful candidates are likely to have a Bachelor's or Master's degree in Computer Science, STEM, or a related field, with at least 6 years of experience in Data Engineering or similar technical roles. An enthusiasm for problem-solving, continuous learning, and a strong understanding of data manipulation and pipeline development are essential. Proficiency in working with large datasets using PySpark, Delta, and Databricks, aligning data transformations with business needs, and a willingness to acquire new skills are crucial for success. Effective communication skills, a proactive approach, and the ability to work collaboratively with stakeholders are highly valued. In addition to a competitive salary, YipitData offers a comprehensive compensation package that includes various benefits, perks, and opportunities for personal and professional growth. Employees are encouraged to focus on their impact, self-improvement, and skill mastery in an environment that promotes ownership, respect, and trust.
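For the PySpark, Delta, and Databricks work this posting centers on, here is a hedged sketch of a routine Delta Lake upsert (MERGE). The paths and merge key are assumptions, and the code requires the delta-spark package or a Databricks runtime.

```python
# Hedged sketch of a Delta Lake upsert (MERGE), a routine operation in
# Databricks/PySpark pipelines. Paths and keys are assumptions; requires
# delta-spark or a Databricks runtime.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.format("parquet").load("/landing/daily_metrics/")  # hypothetical landing zone

target = DeltaTable.forPath(spark, "/warehouse/metrics")                # hypothetical Delta table
(
    target.alias("t")
    .merge(updates.alias("u"), "t.metric_id = u.metric_id")  # hypothetical merge key
    .whenMatchedUpdateAll()      # refresh existing rows
    .whenNotMatchedInsertAll()   # append new rows
    .execute()
)
```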

Posted 1 week ago

Apply

2.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Intermediate Programmer Analyst role at our organization involves actively participating in the establishment and implementation of new or revised application systems and programs in collaboration with the Technology team. Your primary objective will be to contribute to applications systems analysis and programming activities. You will be expected to utilize your knowledge of applications development procedures and concepts, along with a basic understanding of other technical areas, to identify and define necessary system enhancements. This includes leveraging script tools, analyzing, and interpreting code. Additionally, you will consult with users, clients, and other technology groups on issues, recommend programming solutions, and provide installation and support for customer exposure systems. As an Intermediate Programmer Analyst, you will apply your fundamental knowledge of programming languages to create design specifications and analyze applications to detect vulnerabilities and security issues. Testing and debugging will also be part of your responsibilities. Furthermore, you will serve as an advisor or coach to new or lower-level analysts, identify problems, analyze information, and make evaluative judgments to recommend and implement solutions. In this role, you will need to resolve issues by selecting solutions based on your technical experience, guided by precedents. You should be able to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a subject matter expert to senior stakeholders and/or other team members. It is essential to appropriately assess risk when making business decisions, with a focus on safeguarding the firm's reputation and ensuring compliance with applicable laws and regulations. This includes adhering to policies, demonstrating ethical judgment in personal behavior and business practices, and reporting control issues transparently. Qualifications: - 4-8 years of relevant experience in Data Analytics or Big Data - Hands-on experience with SQL, Python, and PySpark, including Spark components - 2-4 years of experience as a Big Data Engineer developing, optimizing, and managing large-scale data processing systems and analytics platforms - 4 years of experience in distributed data processing and near real-time data analytics using PySpark - ETL experience is preferred over Ab Initio - Strong understanding of PySpark execution plans, partitioning, and optimization techniques Education: - Bachelor's degree or equivalent experience This is a full-time position within the Technology job family group, specifically in the Applications Development job family. If you possess the necessary skills and experience, we encourage you to apply and become part of our dynamic team.
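Since this posting singles out PySpark execution plans, partitioning, and optimization, here is a hedged, self-contained sketch of inspecting those on synthetic data. The dataset sizes and key names are toy assumptions.

```python
# Illustrative look at PySpark optimization topics: inspecting an execution
# plan, repartitioning, and a broadcast join. Data is synthetic.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master("local[2]").appName("plans").getOrCreate()

facts = spark.range(1_000_000).withColumn("key", F.col("id") % 100)
dims = (
    spark.range(100)
    .withColumnRenamed("id", "key")
    .withColumn("label", F.concat(F.lit("k"), F.col("key").cast("string")))
)

# Repartition the large side by the join key; broadcast the small side
# so the join avoids shuffling it across the cluster.
joined = facts.repartition("key").join(F.broadcast(dims), "key")

joined.explain(True)                  # parsed/analyzed/optimized/physical plans
print(joined.rdd.getNumPartitions())  # inspect partitioning after the repartition
```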

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have strong experience in PySpark, Python, Unix scripting, SparkSQL, and Hive. You must be proficient in writing SQL queries, creating views, and possess excellent oral and written communication skills. Prior experience in the Insurance domain would be beneficial. A good understanding of the Hadoop Ecosystem including HDFS, MapReduce, Pig, Hive, Oozie, and Yarn is required. Knowledge of AWS services such as Glue, AWS S3, Lambda function, Step Function, and EC2 is essential. Experience in data migration from platforms like Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work. As a candidate, you should have 6-8 years of technical experience in PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years of experience in Big Data/ETL using Python, Spark, and Hive, along with 3+ years of experience in AWS. Your primary key skills should include PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), and Big Data with Python, Spark, and Hive experience. Exposure to Big Data migration is also important. Secondary key skills that would be beneficial for this role include Informatica BDM/PowerCenter, Databricks, and MongoDB.
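To ground the AWS Glue plus PySpark stack this posting names, here is a hedged skeleton of a Glue job. The catalog database, table, and S3 path are placeholders; the awsglue libraries are available inside the Glue runtime rather than via pip.

```python
# Hedged skeleton of an AWS Glue PySpark job. Catalog names and the S3 path
# are placeholders; awsglue libraries are provided by the Glue runtime.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a Glue Catalog table (e.g., crawled from Hive/S3), transform, write out.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db", table_name="policies"  # hypothetical catalog entries
)
df = dyf.toDF().dropDuplicates(["policy_id"])       # hypothetical dedup key

df.write.mode("overwrite").parquet("s3://example-bucket/curated/policies/")  # hypothetical sink

job.commit()
```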

Posted 1 week ago

Apply