6.0 - 7.0 years
15 - 17 Lacs
India
On-site
About The Opportunity
This role sits within the fast-paced enterprise technology and data engineering sector, delivering high-impact solutions in cloud computing, big data, and advanced analytics. We design, build, and optimize robust data platforms powering AI, BI, and digital products for leading Fortune 500 clients across industries such as finance, retail, and healthcare. As a Senior Data Engineer, you will play a key role in shaping scalable, production-grade data solutions with modern cloud and data technologies.

Role & Responsibilities
- Architect and Develop Data Pipelines: Design and implement end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
- Data Warehouse & Data Mart Design: Create scalable data warehouses/marts that empower self-service analytics and machine learning workloads.
- Database Modeling & Optimization: Translate logical models into efficient physical schemas, ensuring optimal partitioning and performance management.
- ETL/ELT Workflow Automation: Build, automate, and monitor robust data ingestion and transformation processes with best practices in reliability and observability.
- Performance Tuning: Optimize Spark jobs and SQL queries through careful tuning of configurations, indexing strategies, and resource management.
- Mentorship and Continuous Improvement: Provide production support, mentor team members, and champion best practices in data engineering and DevOps methodology.

Skills & Qualifications
Must-Have
- 6-7 years of hands-on experience building production-grade data platforms, including at least 3 years with Apache Spark/Databricks.
- Expert proficiency in PySpark, Python, and advanced SQL, with a record of performance-tuning distributed jobs.
- Proven expertise in data modeling, data warehouse/mart design, and managing ETL/ELT pipelines using tools like Airflow or dbt.
- Hands-on experience with major cloud platforms such as AWS or Azure, and familiarity with modern lakehouse/data-lake patterns.
- Strong analytical, problem-solving, and mentoring skills with a DevOps mindset and commitment to code quality.

Preferred
- Experience with AWS analytics services (Redshift, Glue, S3) or the broader Hadoop ecosystem.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Exposure to streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
- Familiarity with ML feature stores, MLOps workflows, or data governance frameworks.
- Relevant certifications (Databricks, AWS, Azure) or active contributions to open source projects.

Location: India | Employment Type: Full-time

Skills: agile methodologies, team leadership, performance tuning, sql, elt, airflow, aws, data modeling, apache spark, pyspark, data, hadoop, databricks, python, dbt, big data technologies, etl, azure
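To make the ingestion → transformation → consumption pattern above concrete, here is a minimal sketch of such a pipeline in PySpark with Delta Lake, of the kind a Databricks environment supports. The bucket path, column names, and target table are illustrative placeholders, not details from the posting.

```python
# Minimal sketch of an ingestion -> transformation -> consumption pipeline,
# assuming a Databricks-style environment with Delta Lake available.
# Paths, columns, and table names are illustrative, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingestion: read raw JSON landed in cloud object storage
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transformation: cleanse and derive analytics-ready columns
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Consumption: write a partitioned Delta table for BI and ML workloads
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders"))
```

Partitioning the consumption table by date is one common choice that keeps downstream queries pruned to the partitions they actually need.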
Posted 1 week ago
7.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering, and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI, and digital products for Fortune 500 clients across finance, retail, and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.

Role & Responsibilities
- Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
- Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
- Translate logical data models into physical schemas; own database design, partitioning, and lifecycle management for cost-efficient performance.
- Implement, automate, and monitor ETL/ELT workflows, ensuring reliability, observability, and robust error handling.
- Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times (see the tuning sketch after this posting).
- Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.

Skills & Qualifications
Must-Have
- 6–7 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
- Expert proficiency in PySpark, Python, and advanced SQL, with a track record of performance-tuning distributed jobs.
- Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
- Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse/data-lake patterns.
- Strong problem-solving skills, a DevOps mindset, and commitment to code quality; comfortable mentoring fellow engineers.

Preferred
- Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
- Exposure to ML feature stores, MLOps workflows, and data-governance/compliance frameworks.
- Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.

Benefits & Culture Highlights
- Remote-first and flexible hours, with 25+ PTO days and comprehensive health cover.
- Annual training budget and certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
- Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.

Skills: data modeling, big data technologies, team leadership, aws, data, sql, agile methodologies, performance tuning, elt, airflow, apache spark, pyspark, hadoop, databricks, python, dbt, etl, azure
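The Spark-tuning responsibility above is typically exercised through configuration knobs like the ones below. This is an illustrative sketch only; the values shown are placeholders to be derived from profiling a real workload, not recommendations.

```python
# Illustrative Spark tuning knobs for the kind of job optimization described
# above; all values are placeholders, to be tuned against a real workload.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("tuned_job")
    # Let adaptive query execution coalesce shuffle partitions at runtime
    .config("spark.sql.adaptive.enabled", "true")
    # Starting point for shuffle parallelism; tune to cluster cores and data size
    .config("spark.sql.shuffle.partitions", "400")
    # Raise the broadcast threshold so small dimension tables skip the shuffle join
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)

df = spark.read.parquet("s3://example-bucket/curated/events/")
# Cache only when a DataFrame is reused across multiple actions
df.cache()
print(df.count())
```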
Posted 1 week ago
5.0 - 9.0 years
3 - 9 Lacs
No locations specified
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Sr Associate IS Architect

What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Stand up and enhance BI reporting capabilities through Cognos, Power BI, or similar tools
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree / Bachelor's degree with 5-9 years of experience in Computer Science, IT, or a related field

Functional Skills:

Must-Have Skills
- Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience using Databricks for building ETL pipelines and handling big data processing
- Experience with data warehousing platforms such as Amazon Redshift or Snowflake
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL)
- Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets
- Experience with BI reporting tools such as Cognos, Power BI, and/or Tableau
- Experience with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps

Good-to-Have Skills:
- Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena)
- Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations
- Understanding of machine learning pipelines and frameworks for ML/AI models

Professional Certifications:
- AWS Certified Data Engineer (preferred)
- Databricks Certified (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
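The "ensure data quality" responsibility in this posting usually boils down to validation gates in the pipeline. Below is a minimal sketch of such a gate in PySpark; the table name and rules are hypothetical examples, not Amgen specifics.

```python
# A minimal sketch of a data-quality validation step of the kind described
# above, assuming a PySpark pipeline; table and rule names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.table("staging.patients")  # hypothetical staging table

checks = {
    "null_patient_id": df.filter(F.col("patient_id").isNull()).count(),
    "duplicate_patient_id": df.count() - df.dropDuplicates(["patient_id"]).count(),
    "future_visit_date": df.filter(F.col("visit_date") > F.current_date()).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # Fail fast so bad data never reaches the warehouse
    raise ValueError(f"Data quality checks failed: {failed}")
```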
Posted 1 week ago
8.0 - 13.0 years
3 - 6 Lacs
No locations specified
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
Let’s do this. Let’s change the world. In this vital role you will work as a member of a Data Platform Engineering team that uses Cloud and Big Data technologies to craft, develop, implement, and maintain solutions to support various functions like Manufacturing, Commercial, Research and Development.

Roles & Responsibilities:
- Collaborate with the Lead Architect, Business SMEs, and Data Scientists to design data solutions
- Serve as a Lead Engineer for technical implementation of projects, including planning, architecture, design, development, testing, and deployment, following agile methodologies
- Design and develop API services for managing Databricks resources, services, and features, and to support data governance applications that manage the security of data assets following the standards
- Design and develop enterprise-level reusable components, frameworks, and services to enable data engineers
- Proactively work on challenging data integration problems by implementing efficient ETL patterns and frameworks for structured and unstructured data
- Automate and optimize data pipelines and frameworks for an easier and more efficient development process
- Manage the overall Enterprise Data Fabric/Lake on the AWS environment to ensure that service delivery is efficient and business SLAs around uptime, performance, and capacity are met
- Help define guidelines, standards, strategies, security policies, and change management policies to support the Enterprise Data Fabric/Lake
- Advise and support project teams (project managers, architects, business analysts, and developers) on cloud platforms (AWS, Databricks preferred), tools, technology, and methodology related to the design and build of scalable, efficient, and maintainable Data Lake and other Big Data solutions
- Experience developing in an Agile development environment and ceremonies
- Familiarity with code versioning using GitLab and code deployment tools
- Mentor junior engineers and team members

What we expect of you

Basic Qualifications
Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years in Computer Science or Engineering

Must-Have Skills:
- Proven hands-on experience with cloud platforms—AWS (preferred), Azure, or GCP
- Strong development experience with Databricks, Apache Spark, PySpark, and Apache Airflow
- Proficiency in Python-based microservices development and deployment
- Experience with CI/CD pipelines, containerization (Docker, Kubernetes/EKS), and infrastructure-as-code tools
- Demonstrated ability to build enterprise-grade, performance-optimized data pipelines in Databricks using Python and PySpark, following best practices and standards
- Solid understanding of SQL and relational/dimensional data modeling techniques
- Strong analytical and problem-solving skills to address complex data engineering challenges
- Familiarity with software engineering standard methodologies, including version control, automated testing, and continuous integration
- Hands-on experience with key AWS services: EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, and Glue
- Exposure to Agile tools such as Jira or Jira Align

Good-to-Have Skills:
- Experience building APIs and services for provisioning and managing AWS Databricks environments
- Knowledge of the Databricks SDK and REST APIs for managing workspaces, clusters, jobs, users, and permissions
- Familiarity with building AI/ML solutions using Databricks-native features
- Experience working with SQL/NoSQL databases and vector databases for large language model (LLM) applications
- Exposure to model fine-tuning and prompt engineering practices
- Experience developing self-service portals using front-end frameworks like React.js
- Ability to thrive in startup-like environments with minimal direction
- Good communication skills to effectively present technical information to leadership and respond to collaborator inquiries

Certifications (preferred but not required):
- AWS Certified Data Engineer
- Databricks Certification
- SAFe Agile Certification

Soft Skills:
- Strong analytical and problem-solving attitude with the ability to troubleshoot sophisticated data and platform issues
- Exceptional communication skills—able to translate technical concepts into clear, business-relevant language for diverse audiences
- Collaborative and globally minded, with experience working effectively in distributed, multi-functional teams
- Self-motivated and proactive, demonstrating a high degree of ownership and initiative in driving tasks to completion
- Skilled at managing multiple priorities in fast-paced environments while maintaining attention to detail and quality
- Team-oriented with a growth mindset, contributing to shared goals and fostering a culture of continuous improvement
- Effective time and task management, with the ability to estimate, plan, and deliver work across multiple projects while ensuring consistency and quality

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
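The "Databricks REST APIs for managing workspaces, clusters, jobs" skill this posting mentions can look something like the sketch below, which lists clusters over the REST interface. The host, token, and endpoint version are placeholders; verify the API path against the Databricks documentation for your workspace.

```python
# Hedged sketch of calling the Databricks REST API to list clusters, the kind
# of resource-management service the posting describes. Host, token, and the
# API version in the path are placeholders to verify against your workspace.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

resp = requests.get(
    f"{host}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
for cluster in resp.json().get("clusters", []):
    print(cluster["cluster_id"], cluster.get("state"))
```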
Posted 1 week ago
5.0 - 9.0 years
4 - 8 Lacs
No locations specified
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do

Job Description
As a Sr. Associate BI Engineer, you will support the development and delivery of data-driven solutions that enable business insights and operational efficiency. You will work closely with senior data engineers, analysts, and stakeholders to support and build dashboards, analyze data, and contribute to the design of scalable reporting systems. This is an ideal role for early-career professionals looking to grow their technical and analytical skills in a collaborative environment.

Roles & Responsibilities:
- Design and maintain dashboards and reports using tools like Spotfire, Power BI, and Tableau
- Perform data analysis to identify trends and support business decisions
- Gather BI requirements and translate them into technical specifications
- Support data validation, testing, and documentation efforts
- Apply best practices in data modeling, visualization, and BI development
- Participate in Agile ceremonies and contribute to sprint planning and backlog grooming

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Strong knowledge of Spotfire
- Exposure to other data visualization tools such as Power BI, Tableau, or QuickSight
- Proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis
- Familiarity with data modeling, warehousing, and ETL pipelines
- Understanding of data structures and reporting concepts
- Strong analytical and problem-solving skills

Preferred Qualifications:
- Familiarity with cloud services like AWS (e.g., Redshift, S3, EC2, IAM) and Databricks (Delta Lake, Unity Catalog, tokens, etc.)
- Understanding of Agile methodologies (Scrum, SAFe)
- Knowledge of DevOps and CI/CD practices
- Familiarity with scientific or healthcare data domains

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
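The "perform data analysis to identify trends" duty above is often a small scripted step before anything reaches a dashboard. Here is a tiny pandas sketch of that idea; the dataset and columns are made up for illustration.

```python
# Tiny sketch of trend analysis of the kind the BI posting describes, assuming
# tabular data in pandas; the dataset and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "month": pd.period_range("2024-01", periods=6, freq="M").astype(str),
    "revenue": [120, 135, 128, 150, 161, 158],
})
# Month-over-month growth highlights the trend a dashboard would visualize
df["mom_growth_pct"] = df["revenue"].pct_change().mul(100).round(1)
print(df)
```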
Posted 1 week ago
5.0 - 9.0 years
4 - 8 Lacs
No locations specified
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
As a Sr. Associate IS Security Engineer at Amgen, you will play a critical role in ensuring the security and protection of the company's information systems and data. You will implement security measures, conduct security audits, analyze security incidents, and provide recommendations for improvements. Your strong knowledge of security protocols, network infrastructure, and vulnerability assessment will contribute to maintaining a secure IT environment.

Roles & Responsibilities:
- Apply patches, perform OS upgrades, and manage platform end-of-life
- Perform annual audits and periodic compliance reviews
- Support GxP validation and documentation processes
- Monitor and respond to security incidents
- Correlate alerts across platforms for threat detection
- Improve procedures through post-incident analysis

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks, Apache Spark (PySpark, SparkSQL), and Snowflake, including workflow orchestration and performance tuning on big data processing
- Solid understanding of security technologies and their core functionality
- Experience in analyzing cybersecurity threats, with up-to-date knowledge of attack vectors and the cyber threat landscape
- Ability to prioritize tasks effectively and solve problems efficiently in a diverse, global team environment
- Good knowledge of Windows and/or Linux systems
- Experience with security alert correlation across different platforms
- Experience with ServiceNow, especially CMDB, Common Service Data Model (CSDM), and IT Service Management
- SQL & database knowledge: experience working with relational databases, querying data, and optimizing datasets

Preferred Qualifications:
- Familiarity with cloud services like AWS (e.g., Redshift, S3, EC2, IAM) and Databricks (Delta Lake, Unity Catalog, tokens, etc.)
- Understanding of Agile methodologies (Scrum, SAFe)
- Knowledge of DevOps and CI/CD practices
- Familiarity with scientific or healthcare data domains

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being.
From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
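The "correlate alerts across platforms" responsibility above often amounts to a time-windowed join across alert feeds. Below is an illustrative sketch of that idea in pandas; the sources, hosts, and alerts are entirely hypothetical.

```python
# Illustrative sketch of cross-platform alert correlation: pairing alerts from
# two hypothetical sources on host and a time window using pandas.
import pandas as pd

edr = pd.DataFrame({
    "host": ["web01", "db02"],
    "ts": pd.to_datetime(["2024-05-01 10:00", "2024-05-01 11:30"]),
    "edr_alert": ["suspicious process", "credential dump"],
})
fw = pd.DataFrame({
    "host": ["web01", "web01"],
    "ts": pd.to_datetime(["2024-05-01 10:02", "2024-05-01 18:00"]),
    "fw_alert": ["outbound beacon", "port scan"],
})

# merge_asof pairs each firewall alert with the nearest earlier EDR alert
# on the same host, within a 15-minute window
correlated = pd.merge_asof(
    fw.sort_values("ts"), edr.sort_values("ts"),
    on="ts", by="host", tolerance=pd.Timedelta("15min"), direction="backward",
)
print(correlated.dropna(subset=["edr_alert"]))
```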
Posted 1 week ago
5.0 years
0 Lacs
Mumbai
On-site
JOB DESCRIPTION
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Data Platform Engineering Lead at JPMorgan Chase within Asset and Wealth Management, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities
- Lead the design, development, and implementation of scalable data pipelines and ETL batches using Python/PySpark on AWS.
- Execute standard software solutions, design, development, and technical troubleshooting.
- Use infrastructure as code to build applications that orchestrate and monitor data pipelines, create and manage on-demand compute resources on the cloud programmatically, and create frameworks to ingest and distribute data at scale.
- Manage and mentor a team of data engineers, providing guidance and support to ensure successful product delivery and support.
- Collaborate proactively with stakeholders, users, and technology teams to understand business/technical requirements and translate them into technical solutions.
- Optimize and maintain data infrastructure on the cloud platform, ensuring scalability, reliability, and performance.
- Implement data governance and best practices to ensure data quality and compliance with organizational standards.
- Monitor and troubleshoot applications and data pipelines, identifying and resolving issues in a timely manner.
- Stay up to date with emerging technologies and industry trends to drive innovation and continuous improvement.
- Add to a team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years applied experience.
- Experience in software development and data engineering, with demonstrable hands-on experience in Python and PySpark.
- Proven experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Good understanding of data modeling, data architecture, ETL processes, and data warehousing concepts.
- Experience with, or good knowledge of, cloud-native ETL platforms like Snowflake and/or Databricks.
- Experience with big data technologies and services like AWS EMR, Redshift, Lambda, and S3.
- Proven experience with efficient Cloud DevOps practices and CI/CD tools like Jenkins/GitLab for data engineering platforms.
- Good knowledge of SQL and NoSQL databases, including performance tuning and optimization.
- Experience with declarative infrastructure-provisioning tools like Terraform, Ansible, or CloudFormation.
- Strong analytical skills to troubleshoot issues and optimize data processes, working independently and collaboratively.
- Experience in leading and managing a team/pod of engineers, with a proven track record of successful project delivery.

Preferred qualifications, capabilities, and skills
- Knowledge of the machine learning model lifecycle, language models, and cloud-native MLOps pipelines and frameworks is a plus.
- Familiarity with data visualization tools and data integration patterns.
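Creating on-demand compute programmatically, as the responsibilities above describe, can be sketched with boto3 launching a transient EMR cluster. This is a trimmed illustration under assumed defaults, not a production configuration; names, roles, and instance types are placeholders, and a real job flow also needs networking and steps.

```python
# Sketch of creating on-demand compute programmatically, using boto3 to launch
# a transient EMR cluster. All names and roles are placeholders; a real job
# flow also needs subnets, steps, and appropriate IAM roles.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="nightly-etl",  # hypothetical job name
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # transient: terminate when steps finish
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Launched cluster:", response["JobFlowId"])
```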
Posted 1 week ago
3.0 years
6 - 6 Lacs
Bengaluru
On-site
DESCRIPTION
Are you passionate about data and code? Does the prospect of dealing with mission-critical data excite you? Do you want to build data engineering solutions that process a broad range of business and customer data? Do you want to continuously improve the systems that enable annual worldwide revenue of hundreds of billions of dollars? If so, then the eCommerce Services (eCS) team is for you!

In eCommerce Services (eCS), we build systems that span the full range of eCommerce functionality, from Privacy, Identity, Purchase Experience, and Ordering to Shipping, Tax, and Financial integration. eCommerce Services manages several aspects of the customer life cycle, starting from account creation and sign-in, to placing items in the shopping cart, proceeding through checkout, order processing, managing order history, and post-fulfillment actions such as refunds and tax invoices. eCS services determine sales tax and shipping charges, and we ensure the privacy of our customers. Our mission is to provide a commerce foundation that accelerates business innovation and delivers a secure, available, performant, and reliable shopping experience to Amazon’s customers.

The goal of the eCS Data Engineering and Analytics team is to provide high-quality, on-time reports to Amazon business teams, enabling them to expand globally at scale. Our team has a direct impact on retail CX, a key component that runs our Amazon flywheel.

As a Data Engineer, you will own the architecture of DW solutions for the Enterprise using multiple platforms. You will have the opportunity to lead the design, creation, and management of extremely large datasets, working backwards from business use cases. You will use your strong business and communication skills to work with business analysts and engineers to determine how best to design the data warehouse for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes in the data warehouse platform to support the rapidly growing and dynamic business demand for data, and for using it to deliver data as a service that will have an immediate influence on day-to-day decision making.

Key job responsibilities
- Develop data products, infrastructure, and data pipelines leveraging AWS services (such as Redshift, Kinesis, EMR, Lambda, etc.) and internal BDT tools (DataNet, Cradle, QuickSight, etc.)
- Improve existing solutions and come up with a next-generation Data Architecture to improve scale, quality, timeliness, coverage, monitoring, and security
- Develop new data models and end-to-end data pipelines
- Create and implement a Data Governance strategy for mitigating privacy and security risks

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL
- Bachelor's degree

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Job details: IND, KA, Bangalore | Data Science
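A typical Redshift load step of the kind this role describes is a COPY from S3. The sketch below shows that pattern via psycopg2; connection details, the target table, and the IAM role ARN are placeholders, not values from the posting.

```python
# Minimal sketch of a Redshift load step: a COPY from S3 issued via psycopg2.
# Host, credentials, table, and IAM role are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
copy_sql = """
    COPY sales.orders
    FROM 's3://example-bucket/curated/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # COPY runs as a single transactional statement
conn.close()
```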
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Summary

About Guidewire
Guidewire is the platform P&C insurers trust to engage, innovate, and grow efficiently. We combine digital, core, analytics, and AI to deliver our platform as a cloud service. More than 540 insurers in 40 countries, from new ventures to the largest and most complex in the world, run on Guidewire. As a partner to our customers, we continually evolve to enable their success. We are proud of our unparalleled implementation track record, with 1600+ successful projects, supported by the largest R&D team and partner ecosystem in the industry. Our Marketplace provides hundreds of applications that accelerate integration, localization, and innovation.

Guidewire Software, Inc. is proud to be an equal opportunity and affirmative action employer. We are committed to an inclusive workplace, and believe that a diversity of perspectives, abilities, and cultures is a key to our success. Qualified applicants will receive consideration without regard to race, color, ancestry, religion, sex, national origin, citizenship, marital status, age, sexual orientation, gender identity, gender expression, veteran status, or disability. All offers are contingent upon passing a criminal history and other background checks where applicable to the position.

Responsibilities

Job Description
- Design and Development: Design and develop robust, scalable, and efficient data pipelines. Design and manage platform solutions to support data engineering needs and ensure seamless integration and performance. Write clean, efficient, and maintainable code.
- Data Management and Optimization: Ensure data quality, integrity, and security across all data pipelines. Optimize data processing workflows for performance and cost-efficiency. Develop and maintain comprehensive documentation for data pipelines and related processes.
- Innovation and Continuous Improvement: Stay current with emerging technologies and industry trends in big data and cloud computing. Propose and implement innovative solutions to improve data processing and analytics capabilities. Continuously evaluate and improve existing data infrastructure and processes.

Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- 5+ years of experience in software engineering with a focus on data engineering and building data platforms
- Strong programming experience using Python or Java
- Proven experience with big data technologies like Apache Spark, Amazon EMR, Apache Iceberg, Amazon Redshift, or similar technologies
- Proven experience with RDBMS (Postgres, MySQL, etc.) and NoSQL (MongoDB, DynamoDB, etc.) databases
- Proficient in AWS cloud services (e.g., Lambda, S3, Athena, Glue) or comparable cloud technologies
- In-depth understanding of SDLC best practices, including Agile methodologies, code reviews, and CI/CD
- Experience working in event-driven and serverless architectures
- Experience with platform solutions and containerization technologies (e.g., Docker, Kubernetes)
- Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment
- Strong communication skills, both written and verbal

Why Join Us
- Opportunity to work with cutting-edge technologies and innovative projects
- Collaborative and inclusive work environment
- Competitive salary and benefits package
- Professional development opportunities and career growth
For more information, please visit www.guidewire.com and follow us on Twitter: @Guidewire_PandC.
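The event-driven, serverless experience this posting asks for often takes the shape of an AWS Lambda reacting to storage events. Here is a minimal sketch of that pattern; the bucket/key parsing follows the standard S3 event shape, and downstream processing is a stub.

```python
# Sketch of an event-driven, serverless pattern: an AWS Lambda handler that
# reacts to an S3 object-created event. Downstream processing is a stub.
import json

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Kick off downstream processing for the newly landed object
        print(f"New object: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("ok")}
```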
Posted 1 week ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
As part of the AI & Data organization, the Enterprise Business Intelligence (EBI) team is central to NXP’s data analytics success. We provide and maintain scalable data solutions, platforms, and methodologies that empower business users to create self-service analytics and drive data-informed decisions. We are seeking a Data Engineering Manager to lead a team of skilled Data Engineers. In this role, you will be responsible for overseeing the design, development, and maintenance of robust data pipelines and data models across multiple data platforms, including Databricks, Teradata, Postgres, and others. You will collaborate closely with Product Owners, Architects, Data Scientists, and cross-functional stakeholders to ensure high-quality, secure, and scalable data solutions.

Key Responsibilities
- Lead, mentor, and grow a team of Data Engineers, fostering a culture of innovation, collaboration, and continuous improvement.
- Oversee the design, development, and optimization of ETL/ELT pipelines and data workflows across multiple cloud and on-premise environments.
- Ensure data solutions align with enterprise architecture standards, including performance, scalability, security, privacy, and compliance.
- Collaborate with stakeholders to translate business requirements into technical specifications and data models.
- Drive adoption of best practices in data engineering, including code quality, testing, version control, and CI/CD.
- Partner with the Operational Support team to troubleshoot and resolve data issues and incidents.
- Stay current with emerging technologies and trends in data engineering and analytics.

Required Skills & Qualifications
- Proven experience as a Data Engineer, with at least 12+ years in ETL/ELT design and development.
- 5+ years of experience in a technical leadership or management role, with a track record of building and leading high-performing teams.
- Strong hands-on experience with cloud platforms (AWS, Azure) and their data services (e.g., S3, Redshift, Glue, Azure Data Factory, Synapse).
- Proficiency in SQL, Python, and PySpark for data transformation and processing.
- Experience with data orchestration tools and CI/CD pipelines (GitHub Actions, GitLab CI).
- Familiarity with data modeling, data warehousing, and data lake architectures.
- Understanding of data governance, security, and compliance frameworks (e.g., GDPR, HIPAA).
- Excellent communication and stakeholder management skills.

Preferred Skills & Qualifications
- Experience with Agile methodologies and DevOps practices.
- Proficiency with Databricks, Teradata, Postgres, Fivetran HVR, and dbt.
- Knowledge of AI/ML workflows and integration with data pipelines.
- Experience with monitoring and observability tools.
- Familiarity with data cataloging and metadata management tools (e.g., Alation, Collibra).

More information about NXP in India...
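The "code quality, testing, CI/CD" best practice this role drives typically means pipeline transformations are written as pure functions with unit tests that CI can run. Below is a small illustrative sketch of that idea; transform_orders is a hypothetical function, not one from the posting.

```python
# Sketch of the automated testing practice implied above, assuming pipeline
# transformations are written as pure, unit-testable functions.
# transform_orders is a hypothetical example.
import pandas as pd

def transform_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Drop cancelled orders and derive a net_amount column."""
    out = df[df["status"] != "cancelled"].copy()
    out["net_amount"] = out["amount"] - out["discount"]
    return out

def test_transform_orders_filters_and_derives():
    df = pd.DataFrame({
        "status": ["ok", "cancelled"],
        "amount": [100.0, 50.0],
        "discount": [10.0, 0.0],
    })
    result = transform_orders(df)
    assert len(result) == 1
    assert result.iloc[0]["net_amount"] == 90.0
```

Run under pytest, a test like this becomes the CI gate that keeps transformation logic from regressing between releases.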
Posted 1 week ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview
We are looking for an experienced Solution Architect (AI/ML & Data Engineering) to lead the design and delivery of advanced data and AI/ML solutions for our clients. The ideal candidate will have a strong background in end-to-end data architecture, AI lifecycle management, cloud technologies, and emerging Generative AI.

Responsibilities
- Collaborate with clients to understand business requirements and design robust data solutions.
- Lead the development of end-to-end data pipelines, including ingestion, storage, processing, and visualization.
- Architect scalable, secure, and compliant data systems following industry best practices.
- Guide data engineers, analysts, and cross-functional teams to ensure timely delivery of solutions.
- Participate in pre-sales efforts: solution design, proposal creation, and client presentations.
- Act as a technical liaison between clients and internal teams throughout the project lifecycle.
- Stay current with emerging technologies in AI/ML, data platforms, and cloud services.
- Foster long-term client relationships and identify opportunities for business expansion.
- Understand and architect across the full AI lifecycle, from ingestion to inference and operations.
- Provide hands-on guidance for containerization and deployment using Kubernetes.
- Ensure proper implementation of data governance, modeling, and warehousing.

Requirements
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience as a Data Solution Architect or in a similar role.
- Deep technical expertise in data architecture, engineering, and AI/ML systems.
- Strong experience with Hadoop-based platforms, ideally Cloudera Data Platform or Data Fabric.
- Proven pre-sales experience: technical presentations, solutioning, and RFP support.
- Proficiency in cloud platforms (Azure preferred; also AWS or GCP) and cloud-native data tools.
- Exposure to Generative AI frameworks and LLMs like OpenAI and Hugging Face.
- Experience in deploying and managing applications on Kubernetes (AKS, EKS, GKE).
- Familiarity with data governance, data modeling, and large-scale data warehousing.
- Excellent problem-solving, communication, and client-facing skills.

Skills & Technology
Architecture & Engineering:
- Hadoop Ecosystem: Cloudera Data Platform, Data Fabric, HDFS, Hive, Spark, HBase, Oozie
- ETL & Integration: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue
- Warehousing: Azure Synapse, Redshift, BigQuery, Snowflake, Teradata, Vertica
- Streaming: Apache Kafka, Azure Event Hubs, AWS Kinesis
Cloud:
- Platforms: Azure (preferred), AWS, GCP
- Data Lakes: ADLS, AWS S3, Google Cloud Storage
- Platforms: Data Fabric, AI Essentials, Unified Analytics, MLDM, MLDE
AI/ML & GenAI:
- Lifecycle Tools: MLflow, Kubeflow, Azure ML, SageMaker, Ray
- Inference: TensorFlow Serving, KServe, Seldon
- Generative AI: Hugging Face, LangChain, OpenAI API (GPT-4, etc.)
DevOps & Deployment:
- Kubernetes: AKS, EKS, GKE, open-source K8s, Helm
- CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps
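The streaming entry in the stack above usually means a consumer feeding the ingestion layer. Here is a hedged sketch using the confluent-kafka client; the broker address, topic, and consumer group are placeholders.

```python
# Hedged sketch of a streaming consumer of the sort the stack above references,
# using the confluent-kafka client; broker, topic, and group are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "analytics-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print("Consumer error:", msg.error())
            continue
        # Hand the payload to the downstream ingestion step
        print(msg.key(), msg.value())
finally:
    consumer.close()
```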
Posted 1 week ago
0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Company Description
At Redshift Solution, we empower businesses to scale faster, operate smarter, and serve their customers better through integrated digital solutions. We specialize in Odoo ERP implementation, chargeback management, eCommerce solutions, and digital marketing. Whether you're a startup or an established brand, we bring strategy, technology, and creativity together to solve real business challenges, enabling growth. We serve clients across industries with a focus on results, transparency, and long-term value.

Role Description
This is an internship role for a Digital Marketing Intern. The Digital Marketing Intern will be responsible for assisting in the development and execution of performance-driven digital marketing campaigns. Day-to-day tasks include social media management, web analytics, content creation, and supporting online marketing initiatives. This is a hybrid role based in Nagpur, with some work-from-home flexibility.

Qualifications
- Skills in social media marketing and online marketing
- Proficiency in digital marketing and web analytics
- Strong communication skills
- Knowledge of PPC and PLA
- Ability to work both independently and as part of a team
- Currently pursuing a degree in Marketing, Communications, Business, or a related field
- Previous internship or project experience in digital marketing is a plus

NOTE:
1. Candidates should be from Nagpur only.
2. There is a paid stipend for the internship programme.
3. If the candidate performs exceptionally well, they will be hired on the company payroll within 60 days.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
haryana
On-site
Previous experience building data science/algorithm-based products is a significant advantage for this position, and experience handling healthcare data is also desired. A Bachelor's or Master's degree in Computer Science, Data Science, or a related subject from a reputable institution is required, along with typically 7-9 years of industry experience.

The candidate should have a strong background in developing data science models and solutions, the ability to quickly adapt to new programming languages, technologies, and frameworks, and a deep understanding of data structures and algorithms. A proven track record of implementing end-to-end data science modeling projects and providing guidance and thought leadership to the team is necessary; experience in a consulting environment with a hands-on attitude is preferred.

As a Data Science Lead, the primary responsibility will be to lead a team of analysts, data scientists, and engineers to deliver end-to-end solutions for pharmaceutical clients. The candidate is expected to participate in client proposal discussions with senior stakeholders and provide technical thought leadership. Expertise in all phases of model development is required, including exploratory data analysis, hypothesis testing, feature creation, dimension reduction, model training, selection, validation, and deployment. A deep understanding of statistical and machine learning methods such as logistic regression, SVM, decision trees, random forests, neural networks, and regression is essential, as is mathematical knowledge of correlation/causation, classification, recommenders, probability, stochastic processes, and NLP, together with their practical implementation to solve business problems. The candidate should also be able to implement ML models in an optimized, sustainable framework and build business understanding in the healthcare domain to develop relevant analytics use cases.

In terms of technical skills, the candidate should have expert-level proficiency in programming languages like Python and SQL, along with working knowledge of relational SQL and NoSQL databases such as Postgres and Redshift. Extensive knowledge of predictive and machine learning models, NLP techniques, deep learning, and unsupervised learning is required, as is familiarity with data structures, pre-processing, feature engineering, sampling techniques, and statistical analysis. Exposure to open-source tools, cloud platforms like AWS and Azure, AI tools like LLMs, and visualization tools like Tableau and Power BI is preferred.

If you do not meet every job requirement, the company encourages you to apply anyway, as they are dedicated to building a diverse, inclusive, and authentic workplace. Your excitement for the role and potential fit may make you the right candidate for this position or others within the company.
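The model-development loop this posting describes (split, train, validate) can be sketched in a few lines of scikit-learn. The example below uses synthetic data and logistic regression purely for illustration; nothing in it is specific to the employer's pipeline.

```python
# Small sketch of the train/validate loop described above, using scikit-learn
# logistic regression on synthetic data; purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"Validation AUC: {roc_auc_score(y_test, probs):.3f}")
```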
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Are you a passionate 3D artist with over 5 years of experience, seeking to channel your creativity into captivating projects in games, VR/AR, and product visualization? Join the vibrant Creative / VFX / Animation team at P99Soft, located in Pune, Maharashtra.

As a Senior 3D Generalist, your main responsibility will be to produce top-notch 3D assets encompassing modeling, texturing, shading, lighting, and rendering. You will be involved in a variety of projects spanning games, AR/VR, and visualization, where you will optimize assets for performance while maintaining high quality standards. Troubleshooting technical issues throughout the 3D pipeline will also be part of your core duties.

Key Requirements:
- Proficiency in Blender, ZBrush, Substance Painter, and the Adobe Suite.
- Strong understanding of PBR workflows, UV mapping, and texture baking.
- Hands-on experience with rendering engines such as Arnold, V-Ray, Redshift, or Octane.
- Sound knowledge of anatomy, lighting, composition, and color theory.
- Familiarity with real-time engines like Unreal Engine or Unity is advantageous.

Desirable Skills:
- Background in rigging and animation.

If you are eager to elevate your 3D expertise to new heights and contribute to cutting-edge projects, we are excited to explore the possibilities with you! Submit your resume and portfolio to pooja.yadav@p99soft.com to apply for this full-time position in Pune, Maharashtra. Let's collaborate and craft something extraordinary together!

#hiring #3Dgeneralist #vfx #animation #gamedev #ar #vr #blender #substancepainter #joinourteam #punejobs #creativecareers
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Data Engineer (ETL, SQL, data modeling with AWS/Azure)
Location: Bangalore
Full/Part-time: Full Time

Build a Career With Confidence
Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.

About The Role
Looking for a SQL Developer with an ETL background and AWS or Azure cloud platform experience.

Job Description
- Design, develop, and implement scalable and efficient data warehouse solutions on cloud platforms using Azure Fabric, AWS Redshift, etc.
- Create and optimize data models to support business reporting and analytical needs.
- Integrate data using ETL tools like Azure Data Factory.
- Write complex SQL queries, stored procedures, and functions for data manipulation and analysis.
- Implement data quality checks and validation processes to ensure data accuracy and integrity (see the sketch after this posting).
- Monitor and optimize data warehouse performance, including query tuning, indexing, and data partitioning strategies.
- Identify and troubleshoot data-related issues, ensuring data availability and reliability.
- Collaborate with data architects, data engineers, business analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
- Analytical skills: strong problem-solving, analytical, and critical thinking skills.

Preferred Skills & Tools (7 to 10 years of experience in the skill sets below)
- Cloud platforms: Azure (Data Factory, Azure Fabric, SQL DB, Data Lake) or AWS (Redshift) (any Azure or AWS tools)
- Databases: PostgreSQL or MSSQL
- ETL tools: Azure Data Factory or any ETL tool experience
- Languages: expert-level proficiency in T-SQL and Python
- BI tools: Power BI or similar (Tableau, Spotfire)
- Version control & DevOps: Azure DevOps, Git (any of these preferred)

Benefits
We are committed to offering competitive benefits programs for all our employees and enhancing our programs when necessary. Make yourself a priority with flexible schedules and parental leave. Drive forward your career through professional development opportunities. Achieve your personal goals with our Employee Assistance Programme.

Our Commitment To You
Our greatest assets are the expertise, creativity, and passion of our employees. We strive to provide a great place to work that attracts, develops, and retains the best talent, promotes employee engagement, fosters teamwork, and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine of growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback, and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Apply now!

Carrier is An Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.

Job Applicant's Privacy Notice: Click on this link to read the Job Applicant's Privacy Notice.
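As referenced in the responsibilities above, a warehouse data-quality check is often a referential-integrity query run from the pipeline. Below is a hedged sketch using pyodbc against a SQL Server/Azure SQL endpoint; the connection string and tables are placeholders.

```python
# Hedged sketch of a warehouse data-quality check run from Python via pyodbc
# against a SQL Server/Azure SQL endpoint; DSN and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example.database.windows.net;DATABASE=dw;UID=etl;PWD=***"
)
cur = conn.cursor()
# Rows violating a simple integrity rule: orders without a matching customer
cur.execute("""
    SELECT COUNT(*)
    FROM dbo.fact_orders o
    LEFT JOIN dbo.dim_customer c ON o.customer_key = c.customer_key
    WHERE c.customer_key IS NULL
""")
orphans = cur.fetchone()[0]
if orphans:
    raise ValueError(f"{orphans} orphaned fact rows failed the integrity check")
conn.close()
```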
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Technologies: AWS Redshift DBA or AWS Aurora; Middleware Admin - WebLogic

A day in the life of an Infoscion
As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance and issue resolution, and to ensure high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit-test plan reviews. You will lead and guide your teams in developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.
Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Summary
As a Data Engineer based out of our BMS Hyderabad site, you are part of the Data Platform team and support the larger data engineering community that delivers data and analytics capabilities for data platforms. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data.

Key Responsibilities
Design, build, and maintain data products, drive their evolution, and apply the data architecture best suited to the organization's data needs.
Serve as the subject matter expert on data and analytics solutions.
Deliver high-quality data products and analytics-ready data solutions.
Develop and maintain ETL/ELT pipelines for ingesting data from various sources into our data warehouse.
Develop and maintain data models to support our reporting and analysis needs.
Optimize data storage and retrieval to ensure efficient performance and scalability.
Collaborate with data architects, data analysts, and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements.
Ensure data quality and integrity through data validation and testing.
Implement and maintain security protocols to protect sensitive data.
Stay up to date with emerging trends and technologies in data engineering and analytics.
Partner closely with the Enterprise Data and Analytics Platform team, other functional data teams, and the Data Community lead to shape and adopt the data and technology strategy.
Evaluate data enhancements and initiatives, assessing capacity and prioritization along with onshore and vendor teams.
Stay knowledgeable about evolving trends in data platforms and product-based implementation.
Manage and guide the data engineers supporting projects, enhancements, and break/fix efforts.
Bring an end-to-end ownership mindset to driving initiatives through completion.
Work comfortably in a fast-paced environment with minimal oversight.
Mentor and provide career guidance to other team members to unlock their full potential.
Prior experience working in an Agile/product-based environment.
Provide strategic feedback to vendors on service delivery and balance workload with vendor teams.

Qualifications & Experience
Hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment.
Breadth of experience in technology capabilities spanning the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML.
Ability to architect data solutions and automation pipelines and take them to production.
Hands-on experience developing and delivering data and ETL solutions with technologies such as AWS data services (Glue, Redshift, Athena, Lake Formation, etc.); experience with Cloudera Data Platform and Tableau is a plus.
Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Strong programming skills in languages such as Python, PySpark, R, PyTorch, Pandas, and Scala.
Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc.
Experience with cloud-based data technologies such as AWS, Azure, or GCP (AWS preferred).
Strong analytical and problem-solving skills.
Excellent communication and collaboration skills.
Functional knowledge of, or prior experience in, the life-sciences research and development domain is a plus.
Experience establishing agile, product-oriented teams that work effectively with teams in the US and at other global BMS sites.
Initiates challenging opportunities that build strong capabilities for self and team.
Demonstrates a focus on improving processes, structures, and knowledge within the team.
Leads in analyzing current states, delivers strong recommendations grounded in an understanding of the environment's complexity, and executes to bring complex solutions to completion.
AWS Data Engineering/Analytics certification is a plus.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based, and remote-by-design jobs. The occupancy type you are assigned is determined by the nature and responsibilities of your role.
Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients, or business partners and to attend meetings on behalf of BMS as directed is an essential job function.
BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments, and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.
BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.
BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area.
If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/
Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
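The ETL/ELT pipeline work this posting describes typically follows an ingest-transform-load pattern. Below is a minimal PySpark sketch of that pattern, assuming a Spark environment with S3 access; the bucket paths, table, and column names are illustrative, not details from the posting:

```python
# A minimal ingest -> transform -> load sketch in PySpark, with hypothetical
# S3 paths and columns. Curated output is partitioned Parquet, which keeps
# downstream Athena / Redshift Spectrum scans cheap.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("study-events-ingest").getOrCreate()

# Ingest: raw JSON landed by an upstream process (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/study_events/")

# Transform: deduplicate on the business key and derive a date column
# for partitioning, dropping records whose timestamp cannot be parsed.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_date").isNotNull())
)

# Load: write the curated, partitioned dataset back to the lake.
clean.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3://example-bucket/curated/study_events/")
```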
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Are you passionate about data and code? Does the prospect of dealing with mission-critical data excite you? Do you want to build data engineering solutions that process a broad range of business and customer data? Do you want to continuously improve the systems that enable annual worldwide revenue of hundreds of billions of dollars? If so, then the eCommerce Services (eCS) team is for you!
In eCommerce Services (eCS), we build systems that span the full range of eCommerce functionality, from Privacy, Identity, Purchase Experience and Ordering to Shipping, Tax and Financial integration. eCommerce Services manages several aspects of the customer life cycle, starting from account creation and sign-in, to placing items in the shopping cart, proceeding through checkout, order processing, managing order history, and post-fulfillment actions such as refunds and tax invoices. eCS services determine sales tax and shipping charges, and we ensure the privacy of our customers. Our mission is to provide a commerce foundation that accelerates business innovation and delivers a secure, available, performant, and reliable shopping experience to Amazon's customers.
The goal of the eCS Data Engineering and Analytics team is to provide high-quality, on-time reports to Amazon business teams, enabling them to expand globally at scale. Our team has a direct impact on retail CX, a key component that runs the Amazon flywheel.
As a Data Engineer, you will own the architecture of DW solutions for the enterprise using multiple platforms. You will have the opportunity to lead the design, creation, and management of extremely large datasets, working backwards from business use cases. You will use your strong business and communication skills to work with business analysts and engineers to determine how best to design the data warehouse for reporting and analytics. You will be responsible for designing and implementing scalable ETL processes in the data warehouse platform to support the rapidly growing and dynamic business demand for data, and for delivering data as a service that will have an immediate influence on day-to-day decision making.

Key job responsibilities
Develop data products, infrastructure, and data pipelines leveraging AWS services (such as Redshift, Kinesis, EMR, Lambda, etc.) and internal BDT tools (DataNet, Cradle, QuickSight, etc.).
Improve existing solutions and devise a next-generation data architecture that improves scale, quality, timeliness, coverage, monitoring, and security.
Develop new data models and end-to-end data pipelines.
Create and implement a data governance strategy for mitigating privacy and security risks.

Basic Qualifications
3+ years of data engineering experience
Experience with data modeling, warehousing and building ETL pipelines
Experience with SQL
Bachelor's degree

Preferred Qualifications
Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company: ADCI - Karnataka
Job ID: A3044017
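For a sense of the streaming side of the stack this posting names (Kinesis, Lambda, S3 as a staging area for warehouse loads), here is a hedged sketch of a Lambda consumer; the stream wiring, bucket name, and key layout are assumptions for illustration, not details from the posting:

```python
# A sketch of a Lambda function consuming a Kinesis stream and staging the
# decoded records as JSON lines in S3 (e.g., for a later Redshift COPY).
# The bucket and key prefix are hypothetical.
import base64
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Decode the base64 Kinesis payloads and stage them as one S3 object."""
    lines = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        lines.append(json.loads(payload))
    if lines:
        key = f"staging/orders/{context.aws_request_id}.json"
        body = "\n".join(json.dumps(x) for x in lines)
        s3.put_object(Bucket="example-ecs-staging", Key=key, Body=body.encode())
    return {"staged": len(lines)}
```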
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Why Choose Ideas2IT
Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world. Big companies like Microsoft are betting heavily on this (see here and here). We are following suit. As a Data Engineer, you will focus exclusively on engineering data pipelines for complex products.

What’s in it for you?
A robust distributed platform to manage a self-healing swarm of bots on unreliable network/compute
Large-scale cloud-native applications
A document comprehension engine leveraging RNNs and other recent OCR techniques
A completely data-driven low-code platform
You will leverage cutting-edge technologies like Blockchain, IoT, and Data Science as you work on projects for leading Silicon Valley startups. Your role does not start or end with just Java development; you will enjoy the freedom to share your suggestions on the choice of tech stacks across the length of the project. If there is a certain technology you would like to explore, you can do your own technical PoCs. Work in a culture that values capability over experience and continuous learning as a core tenet.

Here’s what you’ll bring
Proficiency in SQL and experience with database technologies (e.g., MySQL, PostgreSQL, SQL Server)
Experience in any one of the cloud environments: AWS or Azure
Experience with data modeling, data warehousing, and building ETL pipelines
Experience building large-scale data pipelines and data-centric applications using any distributed storage platform
Experience with data processing tools like Pandas and PySpark
Experience with cloud services like S3, Lambda, SQS, Redshift, Azure Data Factory, ADLS, Function Apps, etc.
Expertise in one or more high-level languages (Python/Scala)
Ability to handle large-scale structured and unstructured data from internal and third-party sources
Ability to collaborate with analytics and business teams to improve data models that feed business intelligence tools, increase data accessibility, and foster data-driven decision-making across the organization
Experience with data visualization tools like Power BI and Tableau
Experience with containerization technologies like Docker and Kubernetes

About Us
Ideas2IT stands at the intersection of Technology, Business, and Product Engineering, offering high-caliber Product Development services. Initially conceived as a CTO consulting firm, we've evolved into thought leaders in cutting-edge technologies such as Generative AI, assisting our clients in embracing innovation. Our forte lies in applying technology to address business needs, demonstrated by our track record of developing AI-driven solutions for industry giants like Facebook, Bloomberg, Siemens, Roche, and others. Harnessing our product-centric approach, we've incubated several AI-based startups, including Pipecandy, Element5, IdeaRx, and Carefi.in, that have flourished into successful ventures backed by venture capital. With fourteen years of remarkable growth behind us, we're steadfast in pursuing ambitious objectives.
P.S. We're all about diversity, and our doors are wide open to everyone. Join us in celebrating the awesomeness of differences!
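As a small illustration of the Pandas-style processing the listing asks for, here is a sketch that aggregates a hypothetical orders.csv into monthly revenue by region; the file and its columns are invented for the example:

```python
# A minimal Pandas sketch: load a (hypothetical) orders extract and roll it
# up to monthly revenue per region, the bread-and-butter shape of BI feeds.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

monthly = (
    orders.assign(month=orders["order_date"].dt.to_period("M"))
          .groupby(["month", "region"], as_index=False)["revenue"].sum()
)
print(monthly.head())
```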
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role
As a Data Engineer, you'll build and maintain data pipelines and architectures. Responsibilities include optimizing databases and ETL processes, using Python or SQL, and collaborating with data teams for informed decision-making.

Why Choose Ideas2IT
Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world. Big companies like Microsoft are betting heavily on this (see here and here). We are following suit. As a Data Engineer, you will focus exclusively on engineering data pipelines for complex products.

What’s in it for you?
A robust distributed platform to manage a self-healing swarm of bots on unreliable network/compute
Large-scale cloud-native applications
A document comprehension engine leveraging RNNs and other recent OCR techniques
A completely data-driven low-code platform
You will leverage cutting-edge technologies like Blockchain, IoT, and Data Science as you work on projects for leading Silicon Valley startups. Your role does not start or end with just Java development; you will enjoy the freedom to share your suggestions on the choice of tech stacks across the length of the project. If there is a certain technology you would like to explore, you can do your own technical PoCs. Work in a culture that values capability over experience and continuous learning as a core tenet.

Here’s what you’ll bring
Proficiency in SQL and experience with database technologies (e.g., MySQL, PostgreSQL, SQL Server)
Experience in any one of the cloud environments: AWS or Azure
Experience with data modeling, data warehousing, and building ETL pipelines
Experience building large-scale data pipelines and data-centric applications using any distributed storage platform
Experience with data processing tools like Pandas and PySpark
Experience with cloud services like S3, Lambda, SQS, Redshift, Azure Data Factory, ADLS, Function Apps, etc.
Expertise in one or more high-level languages (Python/Scala)
Ability to handle large-scale structured and unstructured data from internal and third-party sources
Ability to collaborate with analytics and business teams to improve data models that feed business intelligence tools, increase data accessibility, and foster data-driven decision-making across the organization
Experience with data visualization tools like Power BI and Tableau
Experience with containerization technologies like Docker and Kubernetes
Posted 1 week ago
12.0 - 17.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Principal IS Bus Sys Analyst, Neural Nexus

What You Will Do
Let’s do this. Let’s change the world. In this vital role you will support the delivery of emerging AI/ML capabilities within the Commercial organization as a leader in Amgen's Neural Nexus program. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team!

Roles & Responsibilities:
Establish an effective engagement model to collaborate with the Commercial Data & Analytics (CD&A) team to help realize business value through the application of commercial data and emerging AI/ML technologies.
Serve as the technology product owner for the launch and growth of the Neural Nexus product teams focused on data connectivity, predictive modeling, and fast-cycle value delivery for commercial teams.
Lead and mentor junior team members to deliver on the needs of the business.
Interact with business clients and technology management to create technology roadmaps, build business cases, and drive DevOps to achieve the roadmaps.
Help mature Agile operating principles through deployment of creative and consistent practices for user story development, robust testing and quality oversight, and focus on user experience.
Become the subject matter expert in emerging technology capabilities by researching and implementing new tools and features, and internal and external methodologies.
Build domain expertise across a wide variety of commercial data domains.
Provide input for governance discussions and help prepare materials to support executive alignment on technology strategy and investment.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of information systems experience.
Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology.
Experience leading data and analytics teams in a Scaled Agile Framework (SAFe).
Good interpersonal skills, good attention to detail, and the ability to influence based on data and business value.
Ability to build compelling business cases with accurate cost and effort estimations.
Experience writing user requirements and acceptance criteria in agile project management systems such as Jira.
Ability to explain sophisticated technical concepts to non-technical clients.
Good understanding of sales and incentive compensation value streams.

Technical Skills:
ETL tools: experience with ETL tools such as Databricks, Redshift, or an equivalent cloud-based database.
Big Data, analytics, reporting, data lake, and data integration technologies.
S3 or an equivalent storage system.
AWS or similar cloud-based platforms.
BI tools (Tableau and Power BI preferred).

Preferred Qualifications:
Jira Align and Confluence experience.
Experience with DevOps, Continuous Integration, and Continuous Delivery methodology.
Understanding of software systems strategy, governance, and infrastructure.
Experience managing product features for PI planning and developing product roadmaps and user journeys.
Familiarity with low-code/no-code test automation software.
Technical thought leadership.

Soft Skills:
Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision.
Demonstrated proficiency in written and verbal communication in the English language.
Skilled in providing oversight and mentoring team members; demonstrated ability to delegate work effectively.
Intellectual curiosity and the ability to question partners across functions.
Ability to prioritize successfully based on business value.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully across virtual teams.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: As a BI Analyst in the Business Intelligence, Reporting, and Sensing team, you will play a critical role in transforming data into actionable insights that drive strategic decisions. You will collaborate with cross-functional teams to gather requirements, design analytical solutions, and deliver high-quality dashboards and reports. This role blends technical expertise with business acumen and requires strong communication and problem-solving skills.

Roles & Responsibilities:
Collaborate with System Architects and Product Managers to manage business analysis activities, ensuring alignment with engineering and product goals.
Support the design, development, and maintenance of interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos).
Analyze datasets to identify trends, patterns, and insights that inform business strategy and decision-making.
Collaborate with stakeholders across departments to understand data and reporting needs.
Translate business requirements into technical specifications and analytical solutions.
Work with data engineers to ensure data models and pipelines support accurate and reliable reporting.
Contribute to data quality and governance initiatives.
Document business processes, use cases, and test plans to support development and QA efforts.
Participate in Agile ceremonies and contribute to backlog refinement and sprint planning.

Basic Qualifications and Experience:
Bachelor's or Master's degree in Computer Science, IT, or a related field.
At least 5 years of experience as a Business Analyst or in relevant areas.
Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience.

Functional Skills:
Experience with data visualization tools such as Power BI, Tableau, or QuickSight.
Proficiency in SQL and scripting languages (e.g., Python) for data processing and analysis.
Familiarity with data modeling, warehousing, and ETL pipelines.
Experience writing user stories and acceptance criteria in Agile tools like JIRA.
Strong analytical and problem-solving skills.

Good-to-Have Skills:
Experience with AWS services (e.g., Redshift, S3, EC2).
Understanding of Agile methodologies (Scrum, SAFe).
Knowledge of DevOps and CI/CD practices.
Familiarity with scientific or healthcare data domains.

Professional Certifications:
AWS Developer certification (preferred).
SAFe for Teams certification (preferred).

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.

Shift Information:
This position may require working a second or third shift based on business needs.
Candidates must be willing and able to work during evening or night shifts if required.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 week ago
3.0 years
6 - 8 Lacs
Hyderābād
On-site
Basic Qualifications
3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience with statistical analysis packages such as R, SAS, and Matlab
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

Description
ShipTech is the connective tissue which connects Transportation Service Providers, First Mile, Middle Mile, and Last Mile to facilitate the shipping of billions of packages each year. Our technology solutions power Amazon's complex shipping network, ensuring seamless coordination across the entire delivery chain.
We are seeking a Business Intelligence Engineer II to join our ShipTech Program and Product Growth team, focusing on driving data-driven improvements for our ecosystem. This role will be instrumental in building the right data pipelines and in analyzing and optimizing program requests, scan-related data, customer experience data, transportation performance metrics, and product adoption/growth patterns to enable data-driven decision making for our Program and Product teams.

Key job responsibilities
1. Analysis of historical data to identify trends and support decision making, including written and verbal presentation of results and recommendations
2. Collaborating with product and software development teams to implement analytics systems and data structures to support large-scale data analysis and the delivery of analytical and machine learning models
3. Mining and manipulating data from database tables, simulation results, and log files
4. Identifying data needs and driving data quality improvement projects
5. Understanding the broad range of Amazon’s and ShipTech's data resources, and which to use, how, and when
6. Thought leadership on data mining and analysis
7. Helping to automate processes by developing deep-dive tools, metrics, and dashboards to communicate insights to the business teams
8. Collaborating effectively with internal end-users, cross-functional software development teams, and technical support/sustaining engineering teams to solve problems and implement new solutions
9. Developing ETL pipelines to process and analyze cross-network data

A day in the life
The ShipTech Program and Product Growth team is hiring a BIE to own generating insights, defining metrics to measure and monitor, building analytical products, driving automation and self-service, and overall driving business improvements. The role involves a combination of data mining, data analysis, visualization, statistics, scripting, a bit of machine learning, and usage of AWS services.

Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
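The "pull with SQL, process with Python" qualification above might look like the following in practice. This is only a sketch: it assumes the sqlalchemy-redshift dialect is installed, and the DSN, schema, and table are hypothetical, not taken from the posting:

```python
# Pull a daily metric from Redshift with SQL, then flag anomalies in Python:
# days more than 3 standard deviations from the 28-day rolling mean.
import pandas as pd
import sqlalchemy

# Hypothetical DSN; requires the sqlalchemy-redshift dialect package.
engine = sqlalchemy.create_engine("redshift+psycopg2://user:pass@host:5439/dw")

daily = pd.read_sql(
    "SELECT ship_date, COUNT(*) AS packages "
    "FROM shiptech.scans GROUP BY 1 ORDER BY 1",
    engine,
)

roll = daily["packages"].rolling(28)
daily["anomaly"] = (daily["packages"] - roll.mean()).abs() > 3 * roll.std()
print(daily[daily["anomaly"]])
```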
Posted 1 week ago
3.0 years
0 Lacs
Hyderābād
On-site
DESCRIPTION
ShipTech is the connective tissue which connects Transportation Service Providers, First Mile, Middle Mile, and Last Mile to facilitate the shipping of billions of packages each year. Our technology solutions power Amazon's complex shipping network, ensuring seamless coordination across the entire delivery chain.
We are seeking a Business Intelligence Engineer II to join our ShipTech Program and Product Growth team, focusing on driving data-driven improvements for our ecosystem. This role will be instrumental in building the right data pipelines and in analyzing and optimizing program requests, scan-related data, customer experience data, transportation performance metrics, and product adoption/growth patterns to enable data-driven decision making for our Program and Product teams.

Key job responsibilities
1. Analysis of historical data to identify trends and support decision making, including written and verbal presentation of results and recommendations
2. Collaborating with product and software development teams to implement analytics systems and data structures to support large-scale data analysis and the delivery of analytical and machine learning models
3. Mining and manipulating data from database tables, simulation results, and log files
4. Identifying data needs and driving data quality improvement projects
5. Understanding the broad range of Amazon’s and ShipTech's data resources, and which to use, how, and when
6. Thought leadership on data mining and analysis
7. Helping to automate processes by developing deep-dive tools, metrics, and dashboards to communicate insights to the business teams
8. Collaborating effectively with internal end-users, cross-functional software development teams, and technical support/sustaining engineering teams to solve problems and implement new solutions
9. Developing ETL pipelines to process and analyze cross-network data

A day in the life
The ShipTech Program and Product Growth team is hiring a BIE to own generating insights, defining metrics to measure and monitor, building analytical products, driving automation and self-service, and overall driving business improvements. The role involves a combination of data mining, data analysis, visualization, statistics, scripting, a bit of machine learning, and usage of AWS services.

BASIC QUALIFICATIONS
3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience with statistical analysis packages such as R, SAS, and Matlab
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

PREFERRED QUALIFICATIONS
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Job details
IND, TS, Hyderabad
Business Intelligence
Posted 1 week ago
0 years
3 - 7 Lacs
Hyderābād
On-site
Job Summary
Strong knowledge of AWS services, including S3, AWS DMS (Database Migration Service), and AWS Redshift Serverless.
Experience in setting up and managing data pipelines using AWS DMS.
Proficiency in creating and managing data storage solutions using AWS S3.
Proficiency in working with relational databases, particularly PostgreSQL, Microsoft SQL Server, and Oracle.
Experience in setting up and managing data warehouses, particularly AWS Redshift Serverless.

Responsibilities
Analytical and Problem-Solving Skills
Ability to analyze and interpret complex data sets.
Experience in identifying and resolving data integration issues, such as inconsistencies or discrepancies.
Strong problem-solving skills to troubleshoot and resolve data integration and migration issues.

Soft Skills
Ability to work collaboratively with database administrators and other stakeholders to ensure integration solutions meet business requirements.
Strong communication skills to document data integration processes, including data source definitions, data flow diagrams, and system interactions.
Ability to participate in design reviews and provide input on data integration plans.
Willingness to stay updated with the latest data integration tools and technologies, and to recommend upgrades when necessary.

Security and Compliance
Knowledge of data security and privacy regulations.
Experience in ensuring adherence to data security and privacy standards during data integration processes.

Certifications Required
AWS certifications such as AWS Certified Solutions Architect or AWS Certified Database - Specialty are a plus.
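Since the role centers on managing AWS DMS pipelines, here is a brief boto3 sketch of checking replication-task health; it assumes AWS credentials are already configured, and the region is illustrative only:

```python
# A minimal boto3 sketch: list each DMS replication task and its status,
# the kind of health check a DMS pipeline operator would run or schedule.
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # region is illustrative

def task_statuses() -> dict:
    """Map each replication task identifier to its current status string."""
    resp = dms.describe_replication_tasks()
    return {
        t["ReplicationTaskIdentifier"]: t["Status"]
        for t in resp["ReplicationTasks"]
    }

if __name__ == "__main__":
    for task, status in task_statuses().items():
        print(f"{task}: {status}")
```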
Posted 1 week ago