4.0 - 9.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Explore, analyze, and visualize our healthcare and insurance data to provide insights to stakeholders. Design reports and dashboards to monitor metrics and add value to the business. Identify pain points in existing processes and suggest improvements backed by data. Use existing frameworks, or build new ones, to develop and maintain ETL processes. Build data models to support product growth, when required.
What we are looking for: You have 4+ years of experience working in analytics and product-driven roles. You have advanced SQL skills. You've worked in cross-functional teams involving Product, Engineering, Design and Research, and can manage senior stakeholders. You're a self-starter who is comfortable working autonomously.
Analytics Stack: Analytics: Python / R / SQL + Excel. Database: PostgreSQL, Amazon Redshift, Firestore. Warehouse: Amazon Redshift. ETL: lots of Python + custom-made tooling. Business Intelligence/Visualisation: Python/R libraries (location data) + any BI tool.
Posted 2 months ago
2.0 - 5.0 years
7 - 11 Lacs
Tirodi, Mumbai
Work from Office
We are seeking a talented Data Scientist II to join our team. The ideal candidate will have 2-5 years of experience in data science and expertise in machine learning, deep learning, Python programming, SQL, Amazon Redshift, NLP, and AWS Cloud.
Duties and Responsibilities:
- Develop and implement machine learning models to extract insights from large datasets.
- Utilize deep learning techniques to enhance data analysis and predictive modeling.
- Write efficient Python code to manipulate and analyze data.
- Work with SQL databases to extract and transform data for analysis.
- Utilize Amazon Redshift for data warehousing and analytics.
- Apply NLP techniques to extract valuable information from unstructured data.
- Utilize AWS Cloud services for data storage, processing, and analysis.
Qualifications and Requirements:
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field.
- 2-5 years of experience in data science or a related field.
- Proficiency in machine learning, deep learning, Python programming, SQL, Amazon Redshift, NLP, and AWS Cloud.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Key Competencies:
- Strong analytical skills and problem-solving abilities.
- Proficiency in machine learning and deep learning techniques.
- Excellent programming skills in Python.
- Knowledge of SQL and database management.
- Familiarity with Amazon Redshift, NLP, and AWS Cloud services.
Performance Expectations:
- Develop and deploy advanced machine learning models.
- Extract valuable insights from complex datasets.
- Collaborate with cross-functional teams to drive data-driven decision-making.
- Stay updated on the latest trends and technologies in data science.
We are looking for a motivated and skilled Data Scientist II to join our team and contribute to our data-driven initiatives. If you meet the qualifications and are passionate about data science, we encourage you to apply.
Posted 2 months ago
4.0 - 8.0 years
10 - 20 Lacs
Gurugram
Remote
US Shift - 5 working days. Remote work. (US Airline Group.) Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Strong focus on AWS and PySpark. Knowledge of AWS services, including but not limited to S3, Redshift, Athena, EMR, and Glue. Proficiency in PySpark and related Big Data technologies for ETL processing (see the sketch below). Strong SQL skills for data manipulation and querying. Familiarity with data warehousing concepts and dimensional modeling. Experience with data governance, data quality, and data security practices. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills to work effectively with cross-functional teams.
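For illustration, a minimal sketch of the kind of PySpark ETL job this posting describes, reading raw airline data from S3, cleaning it, and writing partitioned Parquet; the bucket names, paths, and columns are hypothetical placeholders, not details from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical PySpark ETL: raw CSV in S3 -> cleaned, partitioned Parquet.
spark = SparkSession.builder.appName("flights-etl").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-raw-bucket/flights/"))       # placeholder bucket

cleaned = (raw
           .dropDuplicates(["flight_id"])               # drop duplicate records
           .filter(F.col("departure_ts").isNotNull())   # drop rows missing key fields
           .withColumn("dep_date", F.to_date("departure_ts")))  # derive partition column

(cleaned.write
 .mode("overwrite")
 .partitionBy("dep_date")
 .parquet("s3://example-curated-bucket/flights/"))      # placeholder bucket

spark.stop()
```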
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate, we are looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms.
Key Responsibilities: Develop ETL workflows using cloud data services. Manage data storage, lakes, and warehouses. Ensure data quality and pipeline reliability.
Required Skills & Qualifications: Experience with BigQuery, Redshift, or Azure Synapse. Proficiency in SQL, Python, or Spark. Familiarity with data lake architecture and batch/streaming pipelines.
Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills.
Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa, Delivery Manager, Integra Technologies.
Posted 2 months ago
- 5 years
13 - 18 Lacs
Chennai
Work from Office
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.
About The Role
Role: Service Desk Manager, Band C1 (Data Architect)
Location: Chennai, Noida
Total experience: 11+ years
The candidate must have 11+ years of overall experience in ETL and Data Warehousing, of which 3-4 years are on the Hadoop platform and at least 2 years in a Cloud Big Data environment. Must have hands-on experience with Hadoop services such as Hive, Spark, Scala, and Sqoop. Must have hands-on experience writing complex, use-case-driven SQL. Should have about 3+ years of hands-on knowledge of key AWS Cloud and on-prem services and concepts. Should have 3+ years of working experience with AWS Cloud tools such as EMR, Redshift, Glue, and S3. Should have been involved in an on-prem-to-cloud migration. Should have good knowledge of Hive/Spark/Scala scripts and Unix shell scripting. Should be flexible to overlap US business hours. Should be able to drive technical design of cloud applications and guide team members through cloud implementations. Should be well versed in the costing model and best practices of the services used for data processing pipelines in a cloud environment. AWS-certified applicants preferred.
Competencies: Client Centricity, Passion for Results, Collaborative Working, Problem Solving & Decision Making, Effective Communication.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 months ago
5 - 10 years
12 - 16 Lacs
Mumbai
Work from Office
Senior Digital Solutions Consultant - MUM02DM
Company: Worley. Primary Location: IND-MM-Mumbai. Job: Digital Solutions. Schedule: Full-time. Employment Type: Employee. Job Level: Experienced. Job Posting: May 2, 2025. Unposting Date: Jun 1, 2025. Reporting Manager Title: Director, Data Platform.
We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.
The Role: Develop and implement data pipelines for ingesting and collecting data from various sources into a centralized data platform. Develop and maintain ETL jobs using AWS Glue services to process and transform data at scale. Optimize and troubleshoot AWS Glue jobs for performance and reliability. Utilize Python and PySpark to efficiently handle large volumes of data during the ingestion process. Collaborate with data architects to design and implement data models that support business requirements. Create and maintain ETL processes using Airflow, Python and PySpark to move and transform data between different systems (a sketch of this orchestration pattern follows below). Implement monitoring solutions to track data pipeline performance and proactively identify and address issues. Manage and optimize databases, both SQL and NoSQL, to support data storage and retrieval needs. Familiarity with Infrastructure as Code (IaC) tools like Terraform, AWS CDK and others. Proficiency in event-driven, batch-based and API-led data integrations. Proficiency in CI/CD pipelines such as Azure DevOps, AWS pipelines or GitHub Actions.
About You: To be considered for this role it is envisaged you will possess the following attributes.
Technical and Industry Experience: Independent integration developer with 5+ years of experience in developing and delivering integration projects in an agile or waterfall-based project environment. Proficiency in Python, PySpark and SQL for data manipulation and pipeline development. Hands-on experience with AWS Glue, Airflow, DynamoDB, Redshift, S3 buckets, Event Grid, and other AWS services. Experience implementing CI/CD pipelines, including data testing practices. Proficient in Swagger, JSON, XML, SOAP and REST based web service development.
Behaviors Required: Driven by our values and purpose in everything we do. Visible, active, hands-on approach to help teams be successful. Strong proactive planning ability. Optimistic, energetic problem solver with the ability to see long-term business outcomes. Collaborative; able to listen and compromise to make progress. Stronger-together mindset, with a focus on innovation and creation of tangible, realized value. Challenge the status quo.
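As a rough sketch of the Airflow-plus-Glue orchestration this role mentions, the DAG below triggers a pre-existing Glue job through boto3; the DAG name, schedule, region, and Glue job name are hypothetical, and the `schedule` argument assumes Airflow 2.4 or later.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_glue_job():
    # Start a hypothetical, pre-existing Glue job that holds the PySpark logic.
    glue = boto3.client("glue", region_name="us-east-1")
    run = glue.start_job_run(JobName="ingest-bookings")
    print(f"Started Glue job run {run['JobRunId']}")

with DAG(
    dag_id="nightly_ingest",          # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",             # nightly at 02:00
    catchup=False,
) as dag:
    PythonOperator(task_id="start_glue_job", python_callable=run_glue_job)
```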
Education Qualifications, Accreditation, Training: Degree in Computer Science and/or related fields. AWS data engineering certifications desirable.
Moving forward together: We're committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law.
Posted 2 months ago
8 - 13 years
10 - 14 Lacs
Hyderabad, Secunderabad
Work from Office
Digital Solutions Consultant I - HYD014C
Company: Worley. Primary Location: IND-AP-Hyderabad. Job: Digital Solutions. Schedule: Full-time. Employment Type: Agency Contractor. Job Level: Experienced. Job Posting: Apr 21, 2025. Unposting Date: May 21, 2025. Reporting Manager Title: Director, Data Platform.
We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.
The Role: As a Digital Solutions Consultant with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. An AWS Data Modeler is responsible for designing and implementing data models that effectively organize and manage data within Amazon Web Services (AWS) environments. This role involves collaborating with data architects, analysts, and business stakeholders to translate business requirements into scalable and efficient data structures.
Data Modeling: Develop conceptual, logical, and physical data models to support various business applications, ensuring alignment with organizational data standards.
AWS Integration: Design and implement data models optimized for AWS services, including RDS, Redshift, DynamoDB, and S3, to ensure seamless data integration and retrieval.
ETL Processes: Collaborate with ETL developers to design workflows that ensure accurate and efficient data extraction, transformation, and loading, maintaining data quality and consistency.
Metadata Management: Administer and maintain metadata repositories to ensure data accuracy, consistency, and accessibility.
Data Quality Assurance: Implement data validation and cleansing techniques to maintain high data quality standards.
Documentation: Create and maintain comprehensive documentation of data models, data flow diagrams, and related processes to facilitate understanding and maintenance.
About You: To be considered for this role it is envisaged you will possess the following attributes.
Educational Background: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Experience: 8+ years of experience in data modeling, with a focus on designing data structures for finance, commercial, supply chains, procurement, or customer analytics.
AWS Proficiency: Extensive experience with AWS services, including RDS, Redshift, DynamoDB, and S3, with relevant AWS certifications being a plus.
SQL Expertise: Proficiency in SQL for querying databases, creating tables, and managing data relationships.
Data Modeling Tools: Experience with data modeling tools such as ERWin or ER/Studio.
ETL Knowledge: Understanding of ETL processes and experience with data transformation tools and technologies.
Analytical Skills: Strong analytical and problem-solving skills, with the ability to translate complex business requirements into technical specifications.
Communication: Excellent communication skills to effectively collaborate with technical teams and non-technical stakeholders.
Moving forward together: We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We're building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice here. Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley.
Posted 2 months ago
2 - 6 years
10 - 14 Lacs
Hyderabad, Secunderabad
Work from Office
Digital Solutions Consultant I - HYD015A
Company: Worley. Primary Location: IND-AP-Hyderabad. Job: Digital Solutions. Schedule: Full-time. Employment Type: Agency Contractor. Job Level: Experienced. Job Posting: May 7, 2025. Unposting Date: May 20, 2025. Reporting Manager Title: Manager.
We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.
The Role: As a Power BI Developer with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. We are seeking an experienced Power BI Developer with a strong skillset in creating visually compelling reports and dashboards, data modeling, and UI/UX design. The ideal candidate will have expertise in wireframing, UI design, and front-end development using React and CSS to complement their data analysis and visualization abilities in Power BI.
Power BI Report Development: Design, develop, and maintain interactive dashboards and reports in Power BI that provide business insights. Leverage DAX, Power Query, and advanced data modeling techniques to build robust and scalable solutions. Create custom visuals and optimize Power BI performance for large datasets.
UI/UX Design: Collaborate with product managers and stakeholders to define UI and UX requirements for data visualization. Design wireframes, prototypes, and interactive elements for Power BI reports and applications. Ensure designs are user-friendly, intuitive, and visually appealing.
Data Modeling: Develop and maintain complex data models to support analytical and reporting needs. Ensure the integrity, accuracy, and consistency of data within Power BI reports. Implement ETL processes using Power Query for data transformation.
React & Front-End Development: Develop interactive front-end components and custom dashboards using React. Integrate React applications with Power BI APIs for seamless, embedded analytics experiences. Utilize CSS and modern front-end techniques to ensure responsive and visually engaging interfaces.
Collaboration & Problem-Solving: Work closely with cross-functional teams (data analysts, business analysts, project managers) to understand requirements and deliver solutions. Analyze business needs and translate them into effective data solutions and UI designs. Provide guidance and support in best practices for data visualization, user experience, and data modeling.
About You: To be considered for this role it is envisaged you will possess the following attributes: Experience with AWS services and Power BI Service for deployment and sharing. Familiarity with other BI tools or frameworks (e.g., Tableau, Qlik, QuickSight). Basic understanding of back-end technologies and databases (e.g., SQL, NoSQL).
Knowledge of Agile development methodologies. Bachelor's degree in Computer Science, Information Technology, or a related field. Strong experience in Power BI (Desktop and Service), including Power Query, DAX, and data model design. Proficiency in UI/UX design with experience in creating wireframes, mockups, and interactive prototypes. Expertise in React for building interactive front-end applications and dashboards. Advanced knowledge of CSS for styling and creating visually responsive components. Strong understanding of data visualization best practices, including the ability to create meaningful and impactful reports. Experience working with large datasets and optimizing Power BI performance. Familiarity with Power BI APIs and embedding Power BI reports into web applications. Excellent communication and collaboration skills to work effectively in a team environment.
Moving forward together: We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We're building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice. Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley.
Posted 2 months ago
3 - 5 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role
Role Purpose: The purpose of the role is to resolve, maintain and manage the client's software/hardware/network based on service requests raised by end users, as per the defined SLAs, ensuring client satisfaction.
Do: Ensure timely response to all tickets raised by client end users. Resolve service requests while maintaining quality parameters. Act as custodian of the client's network, server, system, storage, platform, infrastructure and other equipment, keeping track of their proper functioning and upkeep. Keep a check on the number of tickets raised (dial home/email/chat/IMS), ensuring the right solutioning within the defined resolution timeframe. Perform root cause analysis of tickets raised and create an action plan to resolve the problem and ensure client satisfaction. Provide acceptance and immediate resolution for high-priority tickets and service requests. Install and configure software/hardware based on service requests. Maintain 100% adherence to timeliness according to the priority of each issue, to manage client expectations and ensure zero escalations. Provide application/user access as per client requirements and requests to ensure timely solutioning. Track all tickets from acceptance to resolution within the resolution time defined by the customer. Maintain timely backups of important data/logs and management resources to ensure the solution is of acceptable quality and maintains client satisfaction. Coordinate with the on-site team for complex problem resolution and ensure timely client servicing. Review the logs gathered by chat bots and ensure all service requests and issues are resolved in a timely manner.
Deliver: Performance parameter: 100% adherence to SLA/timelines. Measure: multiple cases of red time, zero customer escalations, client appreciation emails.
Mandatory Skills: Amazon Redshift. Experience: 3-5 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 months ago
4 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
Data Engineer | 4 to 6 years | Bengaluru
Job description: 4+ years of microservices development experience in two of these: Python, Java, Scala. 4+ years of experience building data pipelines, CI/CD pipelines, and fit-for-purpose data stores. 4+ years of experience with Big Data technologies: Apache Spark, Hadoop, or Kafka. 3+ years of experience with relational and non-relational databases: Postgres, MySQL, NoSQL (DynamoDB or MongoDB). 3+ years of experience working with data consumption patterns. 3+ years of experience working with automated build and continuous integration systems. 2+ years of experience in cloud technologies: AWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena).
Primary Skills: Python, Java, Scala, data pipelines, Apache Spark, Hadoop or Kafka, Postgres, MySQL, NoSQL.
Secondary Skills: Snowflake, Redshift, relational data modeling, dimensional data modeling.
Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications. 1. Applies scientific methods to analyse and solve software engineering problems. 2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance. 3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers. 4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities. 5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Posted 2 months ago
3 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
About The Role: Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.
About The Role - Grade Specific: The primary focus is to help organizations design, develop, and optimize their data infrastructure and systems. They help organizations enhance data processes and leverage data effectively to drive business outcomes.
Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, CentOS, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Big Table, GCP BigQuery, GCP Cloud Storage, GCP DataFlow, GCP DataProc, Git, Google Big Table, Google Data Proc, Greenplum, HQL, IBM Data Stage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, SAS, Scala, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.
Posted 2 months ago
6 - 10 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines. Key Responsibilities: Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift. Optimize Redshift performance using distribution styles, sort keys, and compression techniques. Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt. Develop complex SQL queries, stored procedures, and materialized views for data transformations. Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB. Implement data partitioning, clustering, and query tuning strategies for optimal performance. Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.). Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI. Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies. Automate data ingestion, transformations, and warehouse maintenance tasks. Required Skills & Qualifications: 6+ years of experience in data warehousing, ETL, and data engineering. Strong hands-on experience with Amazon Redshift and AWS data services. Expertise in SQL performance tuning, indexing, and query optimization. Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend. Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena). Familiarity with data lake architectures and modern data stack. Proficiency in Python, Shell scripting, or PySpark for automation. Experience working in Agile/DevOps environments with CI/CD pipelines.
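To make the tuning levers named above concrete, here is a minimal, hypothetical sketch of creating a Redshift table with an explicit distribution key, compound sort key, and column encodings via psycopg2; the cluster endpoint, credentials, and table design are placeholders.

```python
import psycopg2

# Illustrative Redshift DDL exercising DISTKEY, SORTKEY, and compression encodings.
ddl = """
CREATE TABLE IF NOT EXISTS fact_sales (
    sale_id      BIGINT        ENCODE az64,
    customer_id  BIGINT        ENCODE az64,
    sale_date    DATE          ENCODE az64,
    amount       DECIMAL(12,2) ENCODE az64,
    channel      VARCHAR(32)   ENCODE lzo
)
DISTSTYLE KEY
DISTKEY (customer_id)          -- co-locate rows joined on customer_id
COMPOUND SORTKEY (sale_date);  -- prune blocks on date-range filters
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="etl_user", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute(ddl)  # commits on successful exit from the 'with conn' block
```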
Posted 2 months ago
3 - 5 years
6 - 10 Lacs
Bengaluru
Work from Office
Hello Talented Techie! We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations. We are looking for a Data Engineer: a skilled Data Architect/Engineer with strong expertise in AWS and data lake solutions. If you're passionate about building scalable data platforms, this role is for you.
Your responsibilities will include:
Architect & Design: Build scalable and efficient data solutions using AWS services like Glue, Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, Glue Streaming ETL, and EMR.
Real-Time Data Integration: Integrate real-time data from multiple Siemens orgs into our central data lake.
Data Lake Management: Design and manage large-scale data lakes using S3, Glue, and Lake Formation.
Data Transformation: Apply transformations to ensure high-quality, analysis-ready data.
Snowflake Integration: Build and manage pipelines for Snowflake, using Iceberg tables for best performance and flexibility.
Performance Tuning: Optimize pipelines for speed, scalability, and cost-effectiveness.
Security & Compliance: Ensure all data solutions meet security standards and compliance guidelines.
Team Collaboration: Work closely with data engineers, scientists, and app developers to deliver full-stack data solutions.
Monitoring & Troubleshooting: Set up monitoring tools and quickly resolve pipeline issues when needed.
You'd describe yourself as:
Experience: 3+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
Technical Skills: Proficiency in AWS services such as AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka and Lake Formation. Experience with real-time data processing and streaming architectures.
Big Data Querying Tools: Solid understanding of big data querying tools (e.g., Hive, PySpark).
Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues.
Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
Certifications: AWS certifications are a plus.
Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers
Posted 2 months ago
3 - 5 years
11 - 15 Lacs
Hyderabad
Work from Office
Overview: As Senior Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse, satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.
Responsibilities: Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies. Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned. Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools. Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse. Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, consumer privacy by design principles (PII management), all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, and security features implemented by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create source-to-target mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data str/cleansing. Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.
Qualifications: 8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture. 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 4+ years of experience developing enterprise data models. Experience building solutions in the retail or supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi-cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). A small star-schema sketch follows below.
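As a rough illustration of the dimensional modeling this role centers on, here is a minimal star schema sketched with Python's standard-library sqlite3 module; the subject area, tables, and columns are hypothetical, and a physical model would target an MPP warehouse such as Redshift or Synapse instead.

```python
import sqlite3

# Hypothetical star schema: one fact table keyed to two conformed dimensions.
ddl = """
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20250131
    full_date    TEXT NOT NULL,
    fiscal_month TEXT NOT NULL
);
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,  -- surrogate key
    product_code TEXT NOT NULL,        -- natural/business key from the source
    category     TEXT NOT NULL
);
CREATE TABLE fact_shipments (
    date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key  INTEGER NOT NULL REFERENCES dim_product(product_key),
    units        INTEGER NOT NULL,
    freight_cost REAL    NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
```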
Posted 2 months ago
5 - 10 years
9 - 13 Lacs
Hyderabad
Work from Office
Overview: As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, and will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems.
Responsibilities: Be a founding member of the data engineering team. Help to attract talent to the team by networking with your peers, representing PepsiCo HBS at conferences and other events, and discussing our values and best practices when interviewing candidates. Own data pipeline development end-to-end, spanning data modeling, testing, scalability, operability and ongoing metrics. Ensure that we build high quality software by reviewing peer code check-ins. Define best practices for product development, engineering, and coding as part of a world class engineering team. Collaborate in architecture discussions and architectural decision making that is part of continually improving and expanding these platforms. Lead feature development in collaboration with other engineers; validate requirements/stories, assess current system capabilities, and decompose feature requirements into engineering tasks. Focus on delivering high quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with other engineers. Develop software in short iterations to quickly add business value. Introduce new tools and practices to improve data and code quality; this includes researching and sourcing third-party tools and libraries, as well as developing tools in-house to improve workflow and quality for all data engineers. Support data pipelines developed by your team through good exception handling, monitoring, and, when needed, debugging production issues (a simple sketch of this pattern follows after the qualifications below).
Qualifications: 6-9 years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture. 4+ years of experience in SQL optimization and performance tuning. Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with data profiling and data quality tools like Apache Griffin, Deequ, or Great Expectations. Current skills in the following technologies: Python; orchestration platforms: Airflow, Luigi, Databricks, or similar; relational databases: Postgres, MySQL, or equivalents; MPP data systems: Snowflake, Redshift, Synapse, or similar; cloud platforms: AWS, Azure, or similar; version control (e.g., GitHub) and familiarity with deployment and CI/CD tools.
Fluent with Agile processes and tools such as Jira or Pivotal Tracker. Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus.
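A minimal, hypothetical sketch of the exception-handling and monitoring pattern mentioned in the responsibilities: wrap a pipeline step in logged retries so transient failures recover and persistent ones surface to the orchestrator. The step and function names are illustrative.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retries(step, name, retries=3, backoff_s=30):
    """Run a pipeline step, logging each failure and retrying with backoff."""
    for attempt in range(1, retries + 1):
        try:
            return step()
        except Exception:
            log.exception("step %s failed (attempt %d/%d)", name, attempt, retries)
            if attempt == retries:
                raise  # surface to the orchestrator (e.g. Airflow) for alerting
            time.sleep(backoff_s * attempt)

def extract_orders():
    # Placeholder for a real extract, e.g. an API call or a warehouse query.
    return [{"order_id": 1, "amount": 42.0}]

rows = run_with_retries(extract_orders, "extract_orders")
log.info("extracted %d rows", len(rows))
```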
Posted 2 months ago
2 - 5 years
3 - 7 Lacs
Gurugram
Work from Office
Role: Data Engineer
Skills:
Data Modeling: Design and implement efficient data models, ensuring data accuracy and optimal performance.
ETL Development: Develop, maintain, and optimize ETL processes to extract, transform, and load data from various sources into our data warehouse.
SQL Expertise: Write complex SQL queries to extract, manipulate, and analyze data as needed.
Python Development: Develop and maintain Python scripts and applications to support data processing and automation.
AWS Expertise: Leverage your deep knowledge of AWS services, such as S3, Redshift, Glue, EMR, and Athena, to build and maintain data pipelines and infrastructure (see the sketch below for a small Athena example).
Infrastructure as Code (IaC): Experience with tools like Terraform or CloudFormation to automate the provisioning and management of AWS resources is a plus.
Big Data Processing: Knowledge of PySpark for big data processing and analysis is desirable.
Source Code Management: Utilize Git and GitHub for version control and collaboration on data engineering projects.
Performance Optimization: Identify and implement optimizations for data processing pipelines to enhance efficiency and reduce costs.
Data Quality: Implement data quality checks and validation procedures to maintain data integrity.
Collaboration: Work closely with data scientists, analysts, and other teams to understand data requirements and deliver high-quality data solutions.
Documentation: Maintain comprehensive documentation for all data engineering processes and projects.
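For illustration, a small hypothetical example of the AWS-side skills listed above: running an Athena query over S3-backed data with boto3 and polling for completion. The database, table, and result bucket are placeholders.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start a query against a hypothetical Glue-catalogued table.
qid = athena.start_query_execution(
    QueryString="SELECT channel, COUNT(*) AS n FROM analytics.events GROUP BY channel",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```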
Posted 2 months ago
6 - 10 years
15 - 25 Lacs
Hyderabad
Work from Office
Who We Are: At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role: Within our Database Administration team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations. Your expertise will be crucial in configuring, installing and maintaining database management systems, ensuring that our systems are always running at peak performance. You'll also be responsible for managing user access, implementing the highest standards of security to protect our valuable data from unauthorized access. In addition, you'll be a disaster recovery guru, developing strong backup and recovery plans to ensure that our system is always protected in the event of a failure. Your technical acumen will be put to use, as you support end users and application developers in solving complex problems related to our database systems. As a key player on the team, you'll implement policies and procedures to safeguard our data from external threats. You will also conduct capacity planning and growth projections based on usage, ensuring that our system is always scalable to meet our business needs. You'll be a strategic partner, working closely with various teams to coordinate systematic database project plans that align with our organizational goals. Your contributions will not go unnoticed - you'll have the opportunity to propose and implement enhancements that will improve the performance and reliability of the system, enabling us to deliver world-class services to our customers.
Your Future at Kyndryl: Every position at Kyndryl offers a way forward to grow your career, from Junior Administrator to Architect. We have training and upskilling programs that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. One of the benefits of Kyndryl is that we work with customers in a variety of industries, from banking to retail. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are: You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Technical and Professional Experience: 6+ years of experience as a SQL and AWS Engineer. Develop and maintain SQL queries and scripts for database management, monitoring, and optimization. Design, implement, and manage database solutions using AWS services such as Amazon RDS, Amazon Aurora, and Amazon Redshift. Work closely with development, QA, and operations teams to ensure smooth and reliable database operations. Implement and manage monitoring and logging solutions to ensure database health and performance. Use tools like AWS CloudFormation, Terraform, or Ansible to manage database infrastructure.
Ensure the security of databases and applications by implementing best practices and conducting regular audits. Identify and resolve issues related to database performance, deployment, and infrastructure.
Preferred Technical and Professional Experience: Proficiency in the AWS cloud platform, SQL database management, and scripting languages (e.g., Python, Bash). Experience with Infrastructure as Code (IaC) via Terraform and configuration management tools (e.g., Ansible, Puppet). Strong analytical and problem-solving skills, particularly in optimizing SQL queries and database performance. Excellent communication and collaboration skills. Relevant certifications in AWS cloud technologies or SQL database management. Previous experience in a SQL and AWS engineering role or related field.
Being You: Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred! If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Posted 2 months ago
10 - 20 years
37 - 50 Lacs
Pune, Bangalore Rural, Gurugram
Hybrid
Job Summary: We are looking for an experienced and dynamic AWS Data Architect/Lead Data Engineer to lead the design and implementation of data solutions in the cloud. This role will focus on leveraging AWS technologies to create scalable, reliable, and optimized data architectures that drive business insights and data-driven decision-making. As an AWS Data Architect, you will play a pivotal role in shaping the data strategy, implementing best practices, and ensuring the seamless integration of AWS-based data platforms, with a focus on services like Amazon Redshift, Aurora, and other AWS data services.
Posted 2 months ago
4 - 8 years
25 - 30 Lacs
Pune
Hybrid
So, what's the role all about? As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems, as well as working with cross-functional teams to ensure efficient data processing and integration. You will leverage your knowledge of Apache Spark to create robust ETL processes, optimize data workflows, and manage high volumes of structured and unstructured data.
How will you make an impact? Design, implement, and maintain data pipelines using Apache Spark for processing large datasets. Work with data engineering teams to optimize data workflows for performance and scalability. Integrate data from various sources, ensuring clean, reliable, and high-quality data for analysis. Develop and maintain data models, databases, and data lakes. Build and manage scalable ETL solutions to support business intelligence and data science initiatives. Monitor and troubleshoot data processing jobs, ensuring they run efficiently and effectively. Collaborate with data scientists, analysts, and other stakeholders to understand business needs and deliver data solutions. Implement data security best practices to protect sensitive information. Maintain a high level of data quality and ensure timely delivery of data to end-users. Continuously evaluate new technologies and frameworks to improve data engineering processes.
Have you got what it takes? 4-7 years of experience as a Data Engineer, with a strong focus on Apache Spark and big data technologies. Expertise in Spark SQL, DataFrames, and RDDs for data processing and analysis (a short Spark sketch follows at the end of this posting). Proficiency in programming languages such as Python, Scala, or Java for data engineering tasks. Hands-on experience with cloud platforms like AWS, specifically with data processing and storage services (e.g., S3, BigQuery, Redshift, Databricks). Experience with ETL frameworks and tools such as Apache Kafka, Airflow, or NiFi. Strong knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery). Familiarity with containerization technologies like Docker and Kubernetes. Knowledge of SQL and relational databases, with the ability to design and query databases effectively. Solid understanding of distributed computing, data modeling, and data architecture principles. Strong problem-solving skills and the ability to work with large and complex datasets. Excellent communication and collaboration skills to work effectively with cross-functional teams.
What's in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week.
Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7235. Reporting into: Tech Manager. Role Type: Individual Contributor.
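A brief, hypothetical sketch of the Spark SQL / DataFrame work the posting emphasizes: join two Parquet sources, aggregate, and query the result through a temp view. Paths and schemas are illustrative placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-agg").getOrCreate()

# Hypothetical curated-lake inputs.
orders = spark.read.parquet("s3://example-lake/orders/")
customers = spark.read.parquet("s3://example-lake/customers/")

# DataFrame API: join and aggregate daily revenue per customer segment.
daily = (orders.join(customers, "customer_id", "left")
         .groupBy("order_date", "segment")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("buyers")))

# Spark SQL over the same result via a temporary view.
daily.createOrReplaceTempView("daily_revenue")
spark.sql("""
    SELECT segment, SUM(revenue) AS total_revenue
    FROM daily_revenue
    GROUP BY segment
    ORDER BY total_revenue DESC
""").show()
```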
Posted 2 months ago
5 - 8 years
5 - 15 Lacs
Pune, Chennai
Work from Office
• SQL: 2-4 years of experience
• Spark: 1-2 years of experience
• NoSQL databases: 1-2 years of experience
• Database architecture: 2-3 years of experience
• Cloud architecture: 1-2 years of experience
• Experience in a programming language like Python
• Good understanding of ETL (Extract, Transform, Load) concepts
• Good analytical and problem-solving skills
• Inclination for learning and self-motivation
• Knowledge of ticketing tools like JIRA/SNOW
• Good communication skills to interact with customers on issues and requirements
Good to have: Knowledge/experience in Scala.
Posted 2 months ago
5 - 8 years
15 - 20 Lacs
Pune, Chennai
Hybrid
Role & responsibilities
Skills: 1. Spark 2. SQL 3. Python
Must have: Relevant experience of 5-8 years as a Data Engineer, with preferred experience in related technologies as follows: SQL: 2-4 years of experience. Spark: 1-2 years of experience. NoSQL databases: 1-2 years of experience. Database architecture: 2-3 years of experience. Cloud architecture: 1-2 years of experience. Experience in a programming language like Python. Good understanding of ETL (Extract, Transform, Load) concepts. Good analytical and problem-solving skills.
Posted 2 months ago
2 - 5 years
12 - 16 Lacs
Bengaluru
Work from Office
Location: Tower 02, Manyata Embassy Business Park, Racenahali & Nagawara Villages, Outer Ring Rd, Bangalore 540065. Time type: Full time. Posted on: 4 days ago. Job requisition ID: R0000389865.
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Pyramid Overview: Our Product Engineering teams fuel Target's business with cutting-edge technology to deliver incredible experiences and value for guests and team members. Using a responsive architecture platform, we build and deploy industry-leading technology enabling Target to operate efficiently, securely, and reliably from the inside out. We work across Target, developing comprehensive product strategies, leveraging enterprise and guest feedback to set the standard for best in retail. Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs.
Position Overview: Demonstrates broad and deep expertise in Java/Kotlin and frameworks. Designs, develops, and approves end-to-end functionality of a product line, platform, or infrastructure. Communicates and coordinates with project team, partners, and stakeholders. Demonstrates expertise in analysis and optimization of systems capacity, performance, and operational health. Maintains deep technical knowledge within areas of expertise. Stays current with new and evolving technologies via formal training and self-directed education. Experience integrating with third-party and open-source frameworks.
About You: 4-year degree or equivalent experience. Experience: 4-7 years. Programming experience with Java (Spring Boot) and Kotlin (Micronaut). Strong problem-solving skills with a good understanding of data structures and algorithms. Must have exposure to non-relational databases like MongoDB. Must have exposure to distributed systems and microservice architecture. Good to have exposure to data pipelines, MLOps, Spark, Python. Demonstrates a solid understanding of the impact of own work on the team and/or guests. Writes and organizes code using multiple computer languages, including distributed programming, and understands different frameworks and paradigms. Delivers high-performance, scalable, repeatable, and secure deliverables with broad impact (high throughput and low latency). Influences and applies data standards, policies, and procedures. Maintains technical knowledge within areas of expertise. Stays current with new and evolving technologies via formal training and self-directed education.
Know more about us here: Life at Target | Benefits | Follow us on social media | Target Tech: https://tech.target.com/
Posted 2 months ago
4 - 9 years
9 - 13 Lacs
Chennai
Work from Office
KPMG India is looking for Senior - SAP-PC-MM to join our dynamic team and embark on a rewarding career journey.

Responsibilities:
- Lead the implementation, configuration, and support of SAP Product Costing (PC) and Materials Management (MM) modules to optimize business operations
- Analyze business requirements and translate them into SAP solutions, ensuring alignment with industry best practices and organizational goals
- Design, develop, and maintain SAP configurations for PC-MM, including cost component structures, procurement processes, material valuation, and inventory management
- Collaborate with cross-functional teams, including finance, procurement, and supply chain, to streamline business processes and improve system efficiencies
- Troubleshoot and resolve complex issues related to SAP PC-MM, ensuring system stability, data accuracy, and seamless integration with other SAP modules
- Drive process improvements by identifying automation opportunities, reducing manual efforts, and enhancing system functionalities
- Provide end-user training and documentation to ensure efficient adoption of SAP solutions
- Support data migration, system upgrades, and testing activities to maintain SAP system integrity
- Work with external consultants and vendors to implement enhancements and resolve system gaps
- Ensure compliance with organizational policies, regulatory requirements, and audit guidelines within SAP PC-MM processes
Posted 2 months ago
3 - 5 years
16 - 18 Lacs
Bengaluru
Work from Office
KPMG India is looking for AWS Data Engineer - Consultant to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
- Reformulating existing frameworks to optimize their functioning
- Testing such structures to ensure that they are fit for use
- Preparing raw data for manipulation by data scientists
- Detecting and correcting errors in your work
- Ensuring that your work remains backed up and readily accessible to relevant coworkers
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs
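As an illustration of the "preparing raw data for manipulation by data scientists" duty on AWS, here is a hedged sketch using boto3 and pandas; the bucket names, keys, and columns are hypothetical, and writing Parquet this way assumes pyarrow is installed.

```python
# Hedged sketch of a routine data-preparation step on AWS: download a raw
# object from S3, clean it with pandas, and upload the result.
# Bucket names, keys, and column names are hypothetical.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Extract: fetch the raw CSV object from S3
obj = s3.get_object(Bucket="raw-bucket", Key="events/2024/raw.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Transform: basic cleanup so analysts get consistent types
df = df.dropna(subset=["event_id"])
df["event_time"] = pd.to_datetime(df["event_time"], errors="coerce")

# Load: write the cleaned data back as Parquet (requires pyarrow)
buf = io.BytesIO()
df.to_parquet(buf, index=False)
s3.put_object(Bucket="curated-bucket", Key="events/2024/clean.parquet",
              Body=buf.getvalue())
```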
Posted 2 months ago
3 - 5 years
4 - 8 Lacs
Hyderabad
Work from Office
Sr Associate Software Engineer – Finance

What you will do:
The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation

What we expect of you:
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience
- Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience using Databricks for building ETL pipelines and handling big data processing
- Experience with data warehousing platforms such as Amazon Redshift or Snowflake
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL)
- Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets
- Experienced with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps

Preferred Qualifications:
- Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena)
- Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations
- Understanding of machine learning pipelines and frameworks for ML/AI models

Professional Certifications:
- AWS Certified Data Engineer (preferred)
- Databricks Certified (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us:
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
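To illustrate the automated unit testing practice this posting lists among its engineering best practices, here is a small hedged sketch: a pure pandas transform with a pytest case. The function, columns, and expected values are hypothetical, not taken from the posting.

```python
# Hedged sketch of unit-testing a data transform: a small, pure pandas
# function plus a pytest case. All names and values are hypothetical.
import pandas as pd


def standardize_amounts(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows without an amount and normalize currency codes to uppercase."""
    out = df.dropna(subset=["amount"]).copy()
    out["currency"] = out["currency"].str.upper()
    return out


def test_standardize_amounts():
    raw = pd.DataFrame(
        {"amount": [10.0, None, 5.5], "currency": ["usd", "eur", "inr"]}
    )
    result = standardize_amounts(raw)
    assert len(result) == 2                      # null-amount row removed
    assert set(result["currency"]) == {"USD", "INR"}
```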
Posted 2 months ago