15.0 - 20.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: OpenText ECM Tools
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are optimized for performance and usability. You will also engage in problem-solving activities, providing support and guidance to your team members while continuously seeking opportunities to improve application functionality and user experience.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must-have skills: Proficiency in OpenText ECM Tools.
- Good-to-have skills: Experience with other enterprise content management systems.
- Strong understanding of application development methodologies.
- Experience in integrating applications with third-party services.
- Familiarity with database management and data modeling techniques.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in OpenText ECM Tools.
- This position is based in Pune.
- A 15 years full time education is required.
Posted 1 week ago
15.0 - 20.0 years
10 - 14 Lacs
Mumbai
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: .Net Full Stack Development
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that the applications developed meet both user needs and technical requirements. Your role will be pivotal in fostering a collaborative environment that encourages innovation and efficiency in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to discuss progress and address any roadblocks.

Professional & Technical Skills:
- Must-have skills: Proficiency in .Net Full Stack Development.
- Strong understanding of web development frameworks and technologies.
- Experience with database management systems and data modeling.
- Familiarity with cloud services and deployment strategies.
- Ability to implement best practices in software development and design patterns.

Additional Information:
- The candidate should have a minimum of 5 years of experience in .Net Full Stack Development.
- This position is based in Mumbai.
- A 15 years full time education is required.
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data is accessible, reliable, and actionable for stakeholders.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions for work-related problems.
- Assist in the design and implementation of data architecture and data models.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and ETL processes.
- Experience with data quality frameworks and data governance practices.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
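The ETL duties this posting describes follow the classic extract-transform-load pattern. As a rough illustration only (plain Python with SQLite standing in for the target system; the table, field names, and data are invented, and a real Databricks pipeline would express the same three stages with Spark DataFrames and Delta tables):

```python
import csv, io, sqlite3

# Illustrative ETL sketch: extract from a CSV source, transform/clean,
# load into a SQLite table standing in for the target system.
RAW_CSV = """id,name,amount
1,alice,100.5
2,bob,
3,carol,42.0
"""

def extract(text):
    """Extract: parse the raw CSV into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows missing 'amount', normalize names, cast types."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue  # basic data-quality rule: reject incomplete records
        out.append((int(r["id"]), r["name"].title(), float(r["amount"])))
    return out

def load(rows, conn):
    """Load: write the cleaned rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INT, name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
loaded = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(loaded)  # (2, 142.5) -- the incomplete row was rejected
```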
Posted 1 week ago
7.0 - 12.0 years
25 - 35 Lacs
Vadodara
Remote
Job Description
We are looking for a highly skilled SAP MDG Consultant with extensive experience in SAP S/4HANA and SAP RISE environments. The candidate will lead and support strategic master data initiatives across multiple domains, including Customer, Vendor, Material, and Finance. The ideal candidate will possess hands-on expertise in SAP MDG configuration, data modeling, process modeling, and system integration, along with strong communication and stakeholder management skills.

Skills / Qualifications
- Bachelor's or Master's degree in a related field.
- 5+ years of experience in SAP MDG (Master Data Governance): configuration, implementation, and customization.
- Deep understanding of SAP S/4HANA data architecture and integration methods.
- Hands-on experience delivering SAP MDG projects as part of SAP RISE transformation initiatives.
- Strong knowledge of data modeling (entity types, relationships, attributes).
- Proficiency in process modeling using BRF+ and rule-based workflows.
- Experience with DRF (Data Replication Framework) and integration.
- Familiarity with SAP Fiori-based MDG UIs.
- Hands-on expertise with debugging, enhancements (BADI/BAPI), and data validation rules.
- Excellent verbal and written communication skills.
- Ability to work independently in a fully remote environment.
- Strong problem-solving mindset and keen attention to detail.
- Proactive, accountable, and collaborative working style.

Preferred Skills
- Exposure to SAP MDG Consolidation & Mass Processing.
- Knowledge of SAP MDG-F (Finance Master Data).
- Experience with SAP BTP or SAP CPI integration.
- Prior experience in global rollouts or greenfield implementations.

Certification Required
- SAP Certification

Job Responsibilities
- Configure and customize SAP MDG for multiple domains, including Business Partner, Customer, Vendor, Material, and Finance.
- Align SAP MDG functionality with S/4HANA transformation strategies in SAP RISE environments.
- Design, implement, and optimize data models, workflows, and validation rules.
- Integrate SAP MDG with SAP ECC/S/4HANA and third-party systems.
- Collaborate with functional and business teams to define governance rules and approval workflows.
- Lead data quality initiatives, including data cleansing, standardization, and enrichment.
- Troubleshoot and resolve SAP MDG-related issues and manage technical enhancements.
- Provide best-practice recommendations for data governance frameworks.
- Support data migration activities, mass data maintenance, and User Acceptance Testing (UAT).
- Create and maintain detailed documentation, including technical specifications, configurations, and functional designs.

Benefits
- Competitive market rate (depending on experience)
Posted 1 week ago
7.0 - 12.0 years
35 - 50 Lacs
Hyderabad, Chennai
Hybrid
Roles and Responsibilities
- Design and implement data solutions using data architecture principles, including data models, data warehouses, and data lakes.
- Develop cloud-based data pipelines on AWS/GCP platforms to integrate various data sources into a centralized repository.
- Ensure effective data governance through implementation of policies, procedures, and standards for data management.
- Collaborate with cross-functional teams to identify business requirements and develop technical roadmaps for data engineering projects.

Desired Candidate Profile
- 7-12 years of experience in solution architecting, with expertise in data architecture principles, data modeling, data warehousing, data integration, data lakes, data governance, and data engineering.
- Strong understanding of AWS/GCP cloud platforms and their applications in building scalable data architectures.
- Experience working with large datasets from multiple sources; ability to design efficient ETL processes for migration into target systems.
Posted 1 week ago
4.0 - 6.0 years
20 - 25 Lacs
Noida, Pune, Chennai
Work from Office
We are seeking a skilled and detail-oriented Data Engineer with 4 to 6 years of hands-on experience in Microsoft Fabric, Snowflake, and Matillion. The ideal candidate will play a key role in supporting MS Fabric and in migrating from MS Fabric to Snowflake and Matillion.

Roles and Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines using Matillion and integrate data from various sources.
- Architect and optimize Snowflake data warehouses, ensuring efficient data storage, querying, and performance tuning.
- Leverage Microsoft Fabric for end-to-end data engineering tasks, including data ingestion, transformation, and reporting.
- Collaborate with data analysts, scientists, and business stakeholders to deliver high-quality, consumable data products.
- Implement data quality checks, monitoring, and observability across pipelines.
- Automate data workflows and support CI/CD practices for data deployments.
- Troubleshoot performance bottlenecks and data pipeline failures with a root-cause-analysis mindset.
- Maintain thorough documentation of data processes, pipelines, and architecture.

Desired Candidate Profile
- Strong expertise with:
  - Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Notebooks, etc.)
  - Snowflake (warehouse sizing, SnowSQL, performance tuning)
  - Matillion (ETL/ELT orchestration, job optimization, connectors)
- Proficiency in SQL and data modeling (dimensional/star schema, normalization).
- Experience with Python or other scripting languages for data manipulation.
- Familiarity with version control tools (e.g., Git) and CI/CD workflows.
- Solid understanding of cloud data architecture (Azure preferred).
- Strong problem-solving and debugging skills.
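The posting asks for dimensional (star-schema) data modeling. A minimal, hypothetical illustration of the idea — one fact table keyed to dimension tables, queried by slicing on a dimension attribute. SQLite is used here only so the sketch is self-contained; a Snowflake warehouse would use the same SQL shape, and all table and column names are invented:

```python
import sqlite3

# Minimal star schema: fact_sales references dim_customer and dim_date.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, iso_date TEXT, year INT);
CREATE TABLE fact_sales   (customer_key INT, date_key INT, amount REAL);
""")
conn.executemany("INSERT INTO dim_customer VALUES (?,?,?)",
                 [(1, "Acme", "EMEA"), (2, "Globex", "APAC")])
conn.executemany("INSERT INTO dim_date VALUES (?,?,?)",
                 [(10, "2024-01-01", 2024), (11, "2024-01-02", 2024)])
conn.executemany("INSERT INTO fact_sales VALUES (?,?,?)",
                 [(1, 10, 250.0), (1, 11, 100.0), (2, 10, 75.0)])

# Typical star-schema query: aggregate the fact table, sliced by a
# dimension attribute (here, customer region).
rows = conn.execute("""
    SELECT c.region, SUM(f.amount)
    FROM fact_sales f JOIN dim_customer c USING (customer_key)
    GROUP BY c.region ORDER BY c.region
""").fetchall()
print(rows)  # [('APAC', 75.0), ('EMEA', 350.0)]
```

The design choice the schema illustrates: measures live in one narrow fact table, descriptive attributes live in small dimension tables, so aggregations join cheaply and slice along any dimension.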
Posted 1 week ago
3.0 - 5.0 years
15 - 25 Lacs
Noida
Work from Office
We are looking for an experienced Data Engineer with strong expertise in Databricks and Azure Data Factory (ADF) to design, build, and manage scalable data pipelines and integration solutions. The ideal candidate will have a solid background in big data technologies, cloud platforms, and data processing frameworks to support enterprise-level data transformation and analytics initiatives.

Roles and Responsibilities
- Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks.
- Build and optimize data flows and transformations for structured and unstructured data.
- Develop scalable ETL/ELT processes to extract data from various sources including SQL, APIs, and flat files.
- Implement data quality checks, error handling, and performance tuning of data pipelines.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Work with Azure services such as Azure Data Lake Storage (ADLS), Azure Synapse Analytics, and Azure SQL.
- Participate in code reviews, version control, and CI/CD processes.
- Ensure data security, privacy, and compliance with governance standards.

Desired Candidate Profile
- Strong hands-on experience with Azure Data Factory and Azure Databricks (Spark-based development).
- Proficiency in Python, SQL, and PySpark for data manipulation.
- Experience with Delta Lake, data versioning, and streaming/batch data processing.
- Working knowledge of Azure services such as ADLS, Azure Blob Storage, and Azure Key Vault.
- Familiarity with DevOps, Git, and CI/CD pipelines in data engineering workflows.
- Strong understanding of data modeling, data warehousing, and performance tuning.
- Excellent analytical, communication, and problem-solving skills.
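Several of the postings here list "data quality checks" as a pipeline responsibility. A small, generic Python sketch of what such checks can look like, independent of any specific product — the thresholds, field names, and sample batch are all invented for illustration:

```python
# Generic data-quality checks of the kind a pipeline might run before
# publishing a batch: row count, key uniqueness, and null-rate per field.
def quality_report(rows, key_field, required_fields, max_null_rate=0.1):
    n = len(rows)
    report = {"row_count": n, "passed": True, "failures": []}
    # Uniqueness check on the key field.
    keys = [r.get(key_field) for r in rows]
    if len(set(keys)) != n:
        report["passed"] = False
        report["failures"].append(f"duplicate values in key field '{key_field}'")
    # Null-rate check on each required field.
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        if n and nulls / n > max_null_rate:
            report["passed"] = False
            report["failures"].append(f"null rate {nulls}/{n} too high in '{field}'")
    return report

batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},  # missing measure -> raises the null rate
    {"id": 2, "amount": 5.0},   # duplicate key -> uniqueness failure
]
report = quality_report(batch, key_field="id", required_fields=["amount"])
print(report["passed"], report["failures"])
```

In a real pipeline the same checks would typically gate promotion of the batch: a failed report blocks the load and routes the batch to remediation rather than silently publishing bad data.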
Posted 1 week ago
4.0 - 8.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Transport is at the core of modern society. Imagine using your expertise to shape sustainable transport and infrastructure solutions for the future. If you seek to make a difference on a global scale, working with next-gen technologies and the sharpest collaborative teams, then we could be a perfect match.

What You Will Do
You will work in agile teams through good collaboration with our colleagues in Technology Development all around the world. You will lead the team and be responsible for understanding customer usage and performance of our products, as well as the needs of our internal data consumers, providing solutions throughout the entire product life cycle, from idea investigation and concept evaluation to industrialization, aftermarket, and maintenance. You will extract insights and make meaningful interpretations, recommendations, and eventually predictions from the data available from various sources to support our endeavour to move towards data-driven powertrain development. You will work on all aspects of data by combining techniques in data engineering, data analytics, and data science. You will have the opportunity to analyse and solve real-world problems with state-of-the-art techniques like advanced analytics, natural language processing (NLP), machine learning, and generative AI, on the basis of strong statistical knowledge. You get the opportunity to follow your data-driven models from script to test cell, to verification in a truck, and eventually to being used by our end customers.

We have an agile way of working, where each team plans its activities in sprints and delivers solutions together as a team. We strive for an open and honest environment within the teams, where it is easy to ask each other for support when needed. The tasks can be either part of a larger project or short tasks to improve products currently in production. You will get the opportunity to interact with highly committed colleagues from different cultures. You will keep strong communication with project managers, software developers, and calibration, simulation, and test engineers. We hope you will learn as much from us as we will from you.

Who Are You?
We believe that to be successful in this position, you are a team player with strong experience in statistics, data analytics, machine learning, and generative AI, and a will to deliver with effective communication techniques (data storytelling). You have a passion for, and demonstrated success in, leading a team competent in solving real-world problems with data. You must have proven experience in data modelling (regression, clustering, neural networks, time series, etc.) and should have used these techniques to solve real-life challenges (prediction, automation, real-time optimization, etc.). You have strong competence in presenting insights gained from data analytics through visualization tools like Power BI. You have a willingness to learn and take on more responsibility with a can-do attitude. You will be greatly appreciated in this role if you have demonstrated predictive analysis and decision-making using data.

If you hold a Master's degree in Mechanical, Automobile, Electronics, or Mechatronics Engineering with strong analytical skills, have gained a solid domain understanding of powertrain engineering (or the automotive industry) with proven skills in handling and analysing large sets of data to make meaningful interpretations, and if you believe you can work smoothly with Python (including libraries like NumPy, SciPy, Pandas, TensorFlow), R, SQL, Git, Azure, and Hadoop, then you could be a good fit for this role. You also have strong experience in applying machine learning methods such as supervised, unsupervised, and reinforcement learning. Experience of working with relational databases and data privacy, and an understanding of IoT-based instrumentation design with additional data logging to build or validate models, is a big plus. A passion for turning data into knowledge with great visualizations, and experience of working with plant/component models and integrating these models into SIL/MIL/HIL evaluations, would be the icing on the cake.

What's In It For You?
At Volvo Group Trucks Technology Powertrain Engineering, we place a big emphasis on how data should drive our future developments in technology and engineering. We are currently looking for a Senior Data Scientist who can help us accelerate on our journey. Do you want to be part of a team where you get the chance to contribute to a better environment and a sustainable future through data-driven decision-making? Good, we are ready for you!

Ready for the next move? If you are curious to explore how we put our words into action, follow us on LinkedIn and volvogroup. Are you excited to bring your skills and disruptive ideas to the table? We can't wait to hear from you. Apply today! We value your data privacy and therefore do not accept applications via mail.

Who We Are and What We Believe In
We are committed to shaping the future landscape of efficient, safe, and sustainable transport solutions. Fulfilling our mission creates countless career opportunities for talents across the group's leading brands and entities. Applying to this job offers you the opportunity to join Volvo Group. Every day, you will be working with some of the sharpest and most creative brains in our field to leave our society in better shape for the next generation. We are passionate about what we do, and we thrive on teamwork. We are almost 100,000 people united around the world by a culture of care, inclusiveness, and empowerment.

Group Trucks Technology is seeking talents to help design sustainable transportation solutions for the future. As part of our team, you'll help us by engineering exciting next-gen technologies and contributing to projects that determine new, sustainable solutions. Bring your love of developing systems, working collaboratively, and your advanced skills to a place where you can make an impact. Join our design shift that leaves society in good shape for the next generation.
Posted 1 week ago
2.0 - 5.0 years
6 - 10 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
At TigerData, formerly Timescale, we empower developers and businesses with the fastest PostgreSQL platform designed for transactional, analytical, and agentic workloads. Trusted globally by thousands of organizations, TigerData accelerates real-time insights, drives intelligent applications, and powers critical infrastructure at scale. As a globally distributed, remote-first team committed to direct communication, accountability, and collaborative excellence, we're shaping the future of data infrastructure, built for speed, flexibility, and simplicity.

TigerData is hiring a Technical Support Engineer for our growing, international team! The right candidate will be comfortable providing deep technical support and interacting with customers on a day-to-day basis. You should have deep experience with relational databases (PostgreSQL a strong plus!) and must be able to quickly gain a detailed understanding of TimescaleDB. You should have a can-do attitude, where helping the user comes first. TigerData is a remote-first organization. This is a permanent, weekend role: 4-day, 10-hour shifts, 7am-5pm IST, Friday-Monday.

What You Will Do:
- Work with our customers across a wide range of topics: from basic administration of TimescaleDB to deep consultative conversations around design, optimization, and implementation.
- Manage support cases from beginning to end.
- Develop and maintain close relationships with our customers to help them be successful.
- Be curious, always willing to learn, and always looking for ways to improve our products, our processes, and our culture.

What You Need:
- Bachelor's degree in computer science, information systems, or equivalent experience.
- Relational DB experience at scale: administration, data modeling, query optimization.
- Experience with relational databases (PostgreSQL a strong plus!).
- Experience as a technical support professional with external customers.
- Strong aptitude to understand and learn new software products.
- Strong communication skills and the ability to express patience and empathy when explaining complex technical concepts to a varied audience.

Other Things That Will Help:
- Be familiar with TimescaleDB! You should download and test it out before you apply.
- Familiarity with broad cloud computing concepts and experience with AWS, GCP, and Azure.
- Experience with relational databases, NoSQL databases, and data modeling tools.
- Familiarity with PostgreSQL, including best practices, query plan analysis, backup/restore, migrations, etc.
- A strong sense of curiosity and a willingness to wade into the unknown and learn as you go.

Compensation: $50,000 USD per year.

Our Commitment:
- We respond to every applicant.
- We review applications fairly and objectively, and shortlist based on relevant skills and experience.
- We ensure clear and timely communication throughout your candidate journey.
- We maintain a rigorous interview process with a high bar, designed to give you the opportunity to meet various team members you'll collaborate with across our organization.

About TigerData
TigerData, formerly Timescale, sets the standard as the fastest PostgreSQL platform for modern workloads. Trusted by more than 2,000 customers across 25+ countries and powering over 3 million active databases, we enable developers and organizations to build real-time, intelligent applications at scale. Backed by $180 million from top-tier investors, TigerData is building the new standard for data infrastructure, built on PostgreSQL and designed for the future. Want to get a feel for how we work and what we value? Check out our blog post: What It Takes to Thrive at TigerData. We embrace diversity, curiosity, and collaboration. Whether debating the perfect chicken nugget crunch, sharing workout routines, or discussing your favorite plants and pets, you'll find your community here.

Our Tech Stack:
We don't require previous experience with our tech stack, but enthusiasm for learning is key. Our technologies include PostgreSQL, Tiger Cloud, AWS, Go, Docker, Kubernetes, Python, and innovative features like Hypertables, Hypercore, vector search, and real-time analytics.

What We Offer:
- Flexible PTO and comprehensive family leave
- Fridays off in August
- Fully remote opportunities globally
- Stock options for long-term growth
- Monthly WiFi stipend
- Professional development and educational resources
- Premium insurance options for you and your family (US-based employees)

Ready to join the future of PostgreSQL? We can't wait to meet you.
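TimescaleDB, mentioned throughout this posting, stores time-series data in hypertables and aggregates it into fixed intervals with its `time_bucket()` SQL function. As a rough pure-Python analog of the bucketing idea only (this is not the TimescaleDB API, and the readings and bucket width below are made up):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def time_bucket(width: timedelta, ts: datetime) -> datetime:
    """Truncate a timestamp down to the start of its fixed-width bucket."""
    epoch = datetime(1970, 1, 1)
    seconds = int((ts - epoch).total_seconds())
    return epoch + timedelta(seconds=seconds - seconds % int(width.total_seconds()))

# Hypothetical sensor readings: (timestamp, value).
readings = [
    (datetime(2024, 1, 1, 0, 2), 10.0),
    (datetime(2024, 1, 1, 0, 9), 14.0),
    (datetime(2024, 1, 1, 0, 16), 20.0),
]

# Group readings into 15-minute buckets and average each bucket,
# mimicking: SELECT time_bucket('15 minutes', ts), avg(value) ... GROUP BY 1
buckets = defaultdict(list)
for ts, value in readings:
    buckets[time_bucket(timedelta(minutes=15), ts)].append(value)

averages = {b.isoformat(): sum(v) / len(v) for b, v in sorted(buckets.items())}
print(averages)  # {'2024-01-01T00:00:00': 12.0, '2024-01-01T00:15:00': 20.0}
```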
Posted 1 week ago
8.0 - 12.0 years
6 - 9 Lacs
Hyderabad
Work from Office
Career Category: Engineering

Job Description

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description: The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Modeler position is responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Modeler drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data.

Roles & Responsibilities:
- Develop and maintain conceptual, logical, and physical data models to support business needs
- Contribute to and enforce data standards, governance policies, and best practices
- Design and manage metadata structures to enhance information retrieval and usability
- Maintain comprehensive documentation of the architecture, including principles, standards, and models
- Evaluate and recommend technologies and tools that best fit the solution requirements
- Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience:
- Doctorate / Master's / Bachelor's degree with 8-12 years of experience in Computer Science, IT or related field

Functional Skills:

Must-Have Skills
- Data Modeling: Proficiency in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
- Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), and performance tuning for big data processing.
- Implementing data testing and data quality strategies.

Good-to-Have Skills:
- Experience with graph technologies such as Stardog, AllegroGraph, MarkLogic

Professional Certifications:
- Certifications in Databricks are desired

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting

Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 week ago
2.0 - 9.0 years
4 - 11 Lacs
Hyderabad
Work from Office
Career Category Information Systems Job Description Join Amgen s Mission of Serving Patients At Amgen, if you feel like you re part of something bigger, it s because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we ve helped pioneer the world of biotech in our fight against the world s toughest diseases. With our focus on four therapeutic areas -Oncology, Inflammation, General Medicine, and Rare Disease- we reach millions of patients each year. As a member of the Amgen team, you ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What you will do In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives and, visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes Design, build, and support data ingestion, transformation, and delivery pipelines across structured and unstructured sources within the enterprise data engineering. Manage and monitor day-to-day operations of the data engineering environment, ensuring high availability, performance, and data integrity. 
Collaborate with data architects, data governance, platform engineering, and business teams to support data integration use cases across R&D, Clinical, Regulatory, and Commercial functions. Integrate data from laboratory systems, clinical platforms, regulatory systems, and third-party data sources into enterprise data repositories. Implement and maintain metadata capture, data lineage, and data quality checks across pipelines to meet governance and compliance requirements. Support real-time and batch data flows using technologies such as Databricks, Kafka, Delta Lake, or similar. Work within GxP-aligned environments, ensuring compliance with data privacy, audit, and quality control standards. Partner with data stewards and business analysts to support self-service data access, reporting, and analytics enablement. Maintain operational documentation, runbooks, and process automation scripts for continuous improvement of data fabric operations. Participate in incident resolution and root cause analysis, ensuring timely and effective remediation of data pipeline issues. Create documentation, playbooks, and best practices for metadata ingestion, data lineage, and catalog usage. Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories. Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle Collaborate and communicate effectively with the product teams, with cross-functional teams to understand business requirements and translate them into technical solutions Must have Skills: Build and maintain data pipelines to ingest and update metadata into enterprise data catalog platforms in biotech or life sciences or pharma. 
Hands-on experience in data engineering technologies such as Databricks, PySpark, SparkSQL Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies. Proficiency in workflow orchestration, performance tuning on big data processing. experience in data engineering, data operations, or related roles, with at least 2+ years in life sciences, biotech, or pharmaceutical environments. Experience with cloud platforms (e.g., AWS, Azure, or GCP) for data pipeline and storage solutions. Understanding of data governance frameworks, metadata management, and data lineage tracking. Strong problem-solving skills, attention to detail, and ability to manage multiple priorities in a dynamic environment. Effective communication and collaboration skills to work across technical and business stakeholders. Strong problem-solving and analytical skills Excellent communication and teamwork skills Experience with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices. Preferred Qualifications: Data Engineering experience in Biotechnology or pharma industry Experience in writing APIs to make the data available to the consumers Experienced with SQL/NOSQL database, vector database for large language models Experienced with data modeling and performance tuning for both OLAP and OLTP databases Experienced with software engineering best-practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven etc.), automated unit testing, and Dev Ops Basic Qualifications: Master s degree and 3 to 4 + years of Computer Science, IT or related field experience Bachelor s degree and 5 to 8 + years of Computer Science, IT or related field experience Diploma and 7 to 9 years of Computer Science, IT or related field experience Professional Certifications: AWS Certified Data Engineer preferred Databricks Certificate preferred Scaled Agile SAFe certification preferred Soft Skills : Excellent verbal and written communication skills. 
- High degree of professionalism and interpersonal skills.
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.
What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
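Purely as an illustrative sketch (not part of the posting), the record-level data quality checks described in this role might look like the following in Python; the required metadata field names and rules here are assumptions, not an actual Amgen schema:

```python
# Illustrative sketch only: a minimal record-level data-quality check of the
# kind used in metadata ingestion pipelines. Field names are assumptions.
REQUIRED_FIELDS = {"dataset_id", "source_system", "owner", "last_updated"}

def validate_metadata(record: dict) -> list:
    """Return a list of data-quality issues found in one metadata record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append("missing fields: %s" % sorted(missing))
    for field, value in record.items():
        if value in (None, ""):
            issues.append("empty value for field: %s" % field)
    return issues

record = {"dataset_id": "clin-001", "source_system": "LIMS", "owner": ""}
print(validate_metadata(record))
```

In a real pipeline a check like this would typically run per batch, with failing records quarantined and the issue list written to an audit log for GxP traceability.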
Posted 1 week ago
8.0 - 17.0 years
13 - 17 Lacs
Hyderabad
Work from Office
Career Category: Engineering
Job Description
Role Name: IS Architecture
Job Posting Title: Data Architect
Workday Job Profile: Principal IS Architect
Department Name: Digital, Technology Innovation
Role GCF: 06A
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.
ABOUT THE ROLE
Role Description: The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Architect is a senior-level position responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Architect drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of Foundational Data. This role will manage a team of Data Modelers.
Roles & Responsibilities:
- Provide oversight to data modeling team members.
- Develop and maintain conceptual, logical, and physical data models to support business needs.
- Establish and enforce data standards, governance policies, and best practices.
- Design and manage metadata structures to enhance information retrieval and usability.
- Maintain comprehensive documentation of the architecture, including principles, standards, and models.
- Evaluate and recommend technologies and tools that best fit the solution requirements.
- Evaluate emerging technologies and assess their potential impact.
- Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency.
Basic Qualifications and Experience (GCF Level 6A):
- Doctorate degree and 8 years of experience in Computer Science, IT or related field, OR
- Master's degree with 12-15 years of experience in Computer Science, IT or related field, OR
- Bachelor's degree with 14-17 years of experience in Computer Science, IT or related field.
Functional Skills:
Must-Have Skills:
- Data Modeling: Expert in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
- Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
- Information Governance: Familiarity with policies and procedures for managing information assets, including security, privacy, and compliance.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), including performance tuning of big data processing.
Good-to-Have Skills:
- Experience with graph technologies such as Stardog, AllegroGraph, and MarkLogic.
Professional Certifications:
- Certifications in Databricks are desired.
Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 week ago
8.0 - 12.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Career Category: Engineering
Job Description
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.
ABOUT THE ROLE
Role Description: The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Modeler position is responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Modeler drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of Foundational Data.
Roles & Responsibilities:
- Develop and maintain conceptual, logical, and physical data models to support business needs.
- Contribute to and enforce data standards, governance policies, and best practices.
- Design and manage metadata structures to enhance information retrieval and usability.
- Maintain comprehensive documentation of the architecture, including principles, standards, and models.
- Evaluate and recommend technologies and tools that best fit the solution requirements.
- Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency.
Basic Qualifications and Experience:
- Doctorate, Master's, or Bachelor's degree with 8-12 years of experience in Computer Science, IT or related field.
Functional Skills:
Must-Have Skills:
- Data Modeling: Proficiency in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
- Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), including performance tuning of big data processing.
- Implementing data testing and data quality strategies.
Good-to-Have Skills:
- Experience with graph technologies such as Stardog, AllegroGraph, and MarkLogic.
Professional Certifications (preferred):
- Certifications in Databricks are desired.
Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule.
Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Number of Openings: 1
ECMS ID in sourcing stage: 533463
Assignment Duration: 6 Months
Total Yrs. of Experience: 7+ years
Relevant Yrs. of Experience: 7+ years
Detailed JD (Roles and Responsibilities):
ABAP Development:
- Develop and enhance custom programs using ABAP and ABAP OO.
- Debug and optimize ABAP code for performance improvements.
- Strong understanding of integration technologies (ALE/IDoc, BAPI, OData/REST APIs).
- Implement and maintain user exits, BAdIs, and enhancements.
- ABAP knowledge for customizations and enhancements.
SAP BW on HANA:
- Develop BW data models, InfoObjects, ADSOs, CompositeProviders, and Open ODS views.
- Manage extraction, transformation, and loading (ETL) processes using SAP BW.
- Design and implement BW queries using BEx Analyzer or Analysis for Office.
SAP HANA Modeling:
- Create calculation views, attribute views, and analytical views in SAP HANA.
- Optimize HANA models for performance, including SQL scripting and debugging.
- Integrate SAP HANA with SAP BW and other reporting tools.
Technical Integration:
- Collaborate with cross-functional teams to understand business requirements.
- Work with SAP modules (SD, MM, FI, etc.) for end-to-end integration.
- Develop SAP HANA interfaces and support data migration activities.
Testing and Support:
- Perform unit testing, system integration testing, and performance testing.
- Provide post-implementation support and resolve issues promptly.
Required Skills and Qualifications:
- 6+ years of hands-on experience in SAP ABAP, SAP BW, and SAP HANA.
- Strong knowledge of SAP BW architecture, modeling, and performance tuning.
- Expertise in SAP HANA modeling, SQL, and procedures.
- Proficiency in MDG configuration (data modeling, BRF+, workflows, and UI modeling).
- Hands-on experience with S/4HANA architecture and data migration tools like SAP Migration Cockpit, LSMW, and ETL tools (e.g., SAP BODS).
- Experience with ABAP development in SAP ECC and S/4HANA environments.
- Expertise in data validation, cleansing, and quality management processes.
- Familiarity with SAP BusinessObjects, SAC (SAP Analytics Cloud), or other reporting tools.
- Excellent problem-solving and communication skills.
Preferred Skills:
- Knowledge of SAP Fiori/UI5 development.
- Experience with CDS views, AMDP, and HANA XSA.
- Exposure to Agile/Scrum methodologies.
- SAP certification in BW/4HANA or HANA is a plus.
Mandatory skills: SAP ABAP
Desired/Secondary skills: SAP BW
Domain: RCLSAP
Max Vendor Rate Per Day (currency in relevance to work location): 10,000 INR per day
Delivery Anchor (for tracking sourcing statistics, technical evaluation, interviews, feedback, etc.): Krishnakumar_G
Work Location given in ECMS ID: Hyderabad
WFO/WFH/Hybrid: WFO Hybrid
BG Check (before or after onboarding): Before
Working in shifts outside standard daylight hours (to avoid confusion post onboarding): Yes
Posted 1 week ago
8.0 - 9.0 years
15 - 17 Lacs
Hyderabad
Work from Office
A Day in the Life
As a recognized expert and key contributor, you will be responsible for translating prioritized new business models into process and SAP capabilities. You will work cross-functionally with business leaders, stakeholders, and IT to deliver impactful solutions that improve products, processes, and services. This role involves managing complex projects, driving business agendas, and providing consultancy and IT support for new initiatives, acquisitions, and product launches.
Responsibilities may include the following, and other duties may be assigned:
- Engage with business leaders to understand strategies and identify data-driven changes that improve efficiencies and add value.
- Work with data sets to define use cases that enhance products, processes, and services.
- Collaborate with Business Relationship Managers (BRM), business stakeholders, and IT Product Owners/Managers to define business and system requirements.
- Translate prioritized new business models (e.g., bundles, subscriptions, value contracts) into process and SAP capabilities.
- Lead the rollout of new business models, including SAP system configuration, fixes, and enhancements.
- Provide consultancy and IT support for new acquisitions, product launches, and reshaping of existing offers.
- Manage large, moderately complex projects or processes from design to implementation, ensuring alignment with business strategy.
- Develop solutions to moderately complex business problems through data analysis, investigation, and process improvement.
- Serve as a primary contact for specific projects, negotiating and influencing decision-making with internal and external partners.
- Provide guidance, coaching, and mentoring to colleagues, delegating work and reviewing outputs where appropriate.
Required Knowledge and Experience:
- 7+ years of IT experience with a Bachelor's Degree in Engineering, MCA, or MSc.
- Experience in SAP SD and global implementations.
- Deep expertise in business process analysis, combined with hands-on proficiency configuring and leveraging SAP systems across Sales & Distribution, Pricing, and other modules, supported by solid project management practices.
- Strong analytical, problem-solving, and communication skills with the ability to translate business needs into technical solutions.
- Detail-oriented with a track record of driving cross-functional collaboration and delivering customer service excellence.
- Familiarity with data analysis tools, data modeling methodologies, and system configuration within SAP.
Other Attributes:
- Recognized expert with significant autonomy in determining deliverables.
- Contributes to defining direction for new products, processes, and standards with high impact on work group results.
- Exercises considerable influence, often negotiating with others and representing the organization on major initiatives.
- Typically provides guidance and mentorship to colleagues, often managing moderately complex projects or teams.
Benefits & Compensation
Medtronic offers a competitive salary and flexible benefits package. A commitment to our employees lives at the core of our values. We recognize their contributions. They share in the success they help to create. We offer a wide range of benefits, resources, and competitive compensation plans designed to support you at every career and life stage. This position is eligible for a short-term incentive called the Medtronic Incentive Plan (MIP).
We lead global healthcare technology and boldly attack the most challenging health problems facing humanity by searching out and finding solutions. Our Mission, to alleviate pain, restore health, and extend life, unites a global team of 95,000+ passionate people. We are engineers at heart, putting ambitious ideas to work to generate real solutions for real people.
From the R&D lab, to the factory floor, to the conference room, every one of us experiments, creates, builds, improves and solves. We have the talent, diverse perspectives, and guts to engineer the extraordinary. Learn more about our business, mission, and our commitment to diversity here
Posted 1 week ago
2.0 - 4.0 years
3 - 4 Lacs
Hyderabad
Work from Office
About the Role: We are seeking a highly skilled Guidewire BillingCenter Configuration Developer with 6 years of experience to join our dynamic team.
Requirements:
- Sound expertise in BillingCenter configuration and customization, including Billing Accounts, Payment Plans, Invoicing, Collections, and Disbursements.
- Experience with the Gosu programming language and PCF file customization.
- Proficiency in the insurance billing lifecycle and payment processing.
- Knowledge of integration technologies, including SOAP/REST web services and messaging.
- Familiarity with Guidewire Studio and the Guidewire Data Model.
- Experience with database querying using SQL (Oracle, SQL Server).
- Understanding of Agile/Scrum development methodologies.
- Strong analytical and problem-solving skills.
#LI-KS2 #LI-Onsite
Posted 1 week ago
2.0 - 7.0 years
9 - 10 Lacs
Chennai
Work from Office
About the Role: We are looking for an Azure Data Engineer with 4+ years of expertise in Azure Databricks, Azure Data Factory, Azure DevOps, Azure Storage/Data Lake, analytics development, relational databases, and the SQL language.
Requirements:
- Create ER diagrams and write relational database queries.
- Generate database objects and maintain referential integrity.
- Configure, deploy, and maintain databases.
- Participate in development and maintenance of data warehouses.
- Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
- Define and govern data modelling and design standards, tools, best practices, and related development for enterprise data models.
- Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
- Hands-on modelling, design, configuration, installation, performance tuning, and sandbox POCs.
- Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.
- Proficient in Azure SQL, Cosmos DB, and Azure Synapse.
- Provide technical design and coding assistance to the team to accomplish the project deliverables as planned/scoped.
#LI-Hybrid #LI-AS1
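As a purely illustrative aside (not part of the posting), "maintain referential integrity" in practice means declaring foreign keys so the database rejects orphaned rows. A minimal sketch using SQLite from the Python standard library, with hypothetical table names:

```python
# Illustrative sketch only: enforcing referential integrity with a foreign
# key, so the database rejects an order pointing at a missing customer.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs FK checks enabled
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
    "customer_id INTEGER REFERENCES customer(id))"
)
conn.execute("INSERT INTO customer VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders VALUES (10, 1)")  # valid parent row
try:
    conn.execute("INSERT INTO orders VALUES (11, 99)")  # no such customer
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

The same principle carries over to Azure SQL or Synapse, where the constraint would be declared with `FOREIGN KEY ... REFERENCES` in T-SQL.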
Posted 1 week ago
2.0 - 4.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Job Title: Software Engineer II - C# Full Stack
Job Description
Role and Responsibilities:
- Build scalable, secure, and HIPAA-compliant cloud-based services.
- Design and develop platform services using Microsoft stack technologies, specifically .NET Framework/C#, SQLite, React, Filament, WCF, and WPF.
- Evaluate the latest technologies and develop POCs to demonstrate their relevance.
- Work collaboratively in a SAFe Agile/Scrum team.
- Troubleshoot integration and production issues in cloud services and cloud infrastructure.
- Collaborate with the Operations team to help maintain various environments to meet uptime, performance, cost, and security goals.
Qualifications:
- 2-4 years of experience in the design and development of cloud-based solutions.
- Proficient in a modern application development environment using a combination of .NET/C# and JavaScript web frameworks (React, Filament).
- Proficient in data modeling and logical and physical database design using MS SQL.
- Experience in building triggers and stored procedures/functions using SQL and T-SQL.
- Knowledge of data interchange formats like XML and JSON.
- Experience working in an agile Scrum environment.
- Excellent communication skills; works well in teams.
- Bachelor's degree (BA/BS) in Computer Science, Information Systems, or a related field.
Preferred Skills:
- Knowledge of White UI automation.
- Familiarity with the healthcare industry.
- Familiarity with cryptography.
How we work together: We believe that we are better together than apart. For our office-based teams, this means working in person at least 3 days per week. This role is an office role.
About Philips: We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others. Learn more about our business. Discover our rich and exciting history. Learn more about our purpose.
If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our culture of impact with care here. #LI-EU #LI-Hybrid #LI-PHILIN
Posted 1 week ago
3.0 - 7.0 years
10 - 11 Lacs
Hyderabad
Work from Office
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Consultant Specialist - CMDB Admin.
In this role, you will:
- Support the management and maintenance of the CMDB and align it with the Common Service Data Model (CSDM) framework, ensuring data accuracy, supporting IT processes, and enabling improved decision-making through reliable configuration data.
- Enable the ongoing maturity of the CMDB by supporting the ongoing development of Technical Service, data integrations, and CI class administration.
- Support the requirements to deliver reports and dashboards providing visibility into CMDB health and actionable insights for key stakeholders.
- Assist with the ongoing review and refinement of CMDB processes and policies to support alignment to the agreed data models and CSDM strategy.
- Be responsible for regularly interfacing with the CMDB Product Owner and the wider team to provide updates on progress and the benefits of what's being delivered.
Requirements:
- Proactive self-starter.
- Experience of Agile practices (e.g. Scrum, Kanban).
- Passion to improve the customer experience.
- Proficiency in gathering, analyzing and presenting information in a clear, concise, and accurate manner.
- Experience working with ServiceNow or other ITSM tools.
- Understanding of Configuration Management and the ServiceNow Common Service Data Model would be beneficial.
Posted 1 week ago
1.0 - 9.0 years
20 - 25 Lacs
Hyderabad
Work from Office
If you are looking for a game-changing career, working for one of the world's leading financial institutions, you've come to the right place. As a Principal Software Engineer at JP Morgan Chase within the Consumer & Community Banking Technology Team, you provide expertise and engineering excellence as an integral part of an agile team to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. Leverage your advanced technical capabilities and collaborate with colleagues across the organization to drive best-in-class outcomes across various technologies to support one or more of the firm's portfolios.
Job responsibilities:
- Creates complex and scalable coding frameworks using appropriate software design frameworks.
- Develops secure and high-quality production code, and reviews and debugs code written by others.
- Advises cross-functional teams on technological matters within your domain of expertise.
- Serves as the function's go-to subject matter expert.
- Contributes to the development of technical methods in specialized fields in line with the latest product development methodologies.
- Creates durable, reusable software frameworks that are leveraged across teams and functions.
- Influences leaders and senior stakeholders across business, product, and technology teams.
- Champions the firm's culture of diversity, opportunity, inclusion, and respect.
Required qualifications, capabilities, and skills:
- Formal training or certification on data management concepts and 10+ years of applied experience.
- Experience leading technologists to manage, anticipate, and solve complex technical items within your domain of expertise.
- Proven experience in designing and developing large-scale data pipelines for batch and stream processing.
- Strong understanding of Data Warehousing, Data Lake, ETL processes, and Big Data technologies (e.g., Hadoop, Snowflake, Databricks, Apache Spark, PySpark, Airflow, Apache Kafka, Java, open file and table formats, Git, CI/CD pipelines, etc.).
- Expertise with public cloud platforms (e.g., AWS, Azure, GCP) and modern data processing and engineering tools.
- Excellent communication, presentation, and interpersonal skills.
- Experience developing or leading large or cross-functional teams of technologists.
- Demonstrated prior experience influencing across highly matrixed, complex organizations and delivering value at scale.
- Experience leading complex projects supporting system design, testing, and operational stability.
- Experience with hiring, developing, and recognizing talent.
- Extensive practical cloud-native experience.
- Expertise in Computer Science, Computer Engineering, Mathematics, or a related technical field.
Preferred qualifications, capabilities, and skills:
- Experience working at the code level and ability to be hands-on performing PoCs and code reviews.
- Experience in Data Modeling (ability to design Conceptual, Logical, and Physical Models and ERDs; proficiency in data modeling software like ERwin).
- Experience with Data Governance, Data Privacy & Subject Rights, Data Quality, and Data Security practices.
- Strong understanding of Data Validation / Data Quality.
- Experience supporting large-scale AI/ML data requirements.
- Experience in data visualization and BI tools is a huge plus.
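Purely as an illustrative aside (not part of the posting), the stream-processing work mentioned above often reduces to windowed aggregation over keyed events. A minimal tumbling-window sketch in plain Python, with the event shape assumed (a real pipeline would use Spark Structured Streaming or Kafka Streams):

```python
# Illustrative sketch only: count events per key in fixed ("tumbling")
# time windows, the basic aggregation a streaming engine performs.
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed windows; count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

events = [(0, "a"), (3, "b"), (5, "a"), (7, "a"), (12, "b")]
print(tumbling_window_counts(events, 5))
```

Production engines add what this sketch omits: late-event handling via watermarks, state checkpointing, and exactly-once delivery guarantees.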
Posted 1 week ago
7.0 - 12.0 years
40 - 45 Lacs
Chennai
Hybrid
Role: Data Engineer/Architect
Experience: 7 to 16 Years
Location: Chennai (3 days in office per week)
Mandatory Skills: Data Warehousing, Data Modelling, Snowflake, Data Build Tool (DBT), SQL, any cloud (AWS/Azure/GCP), Python/PySpark (good to have)
Overview of the requirement: We are looking for a skilled Data Architect / Sr. Data Engineer to design and implement data solutions supporting the Marketing, Sales, and Customer Service areas. The ideal candidate will have experience with DBT, Snowflake, Python (good to have), and Azure/AWS/GCP, along with a strong foundation in cloud platforms. You will be responsible for developing scalable, efficient data architectures that enable personalized customer experiences and advanced analytics.
Roles and Responsibilities:
- Implement and maintain data warehousing solutions in Snowflake to handle large-scale data processing and analytics needs.
- Optimize workflows using DBT to streamline data transformation and modeling processes.
- Strong expertise in SQL with hands-on experience in querying, transforming, and analysing large datasets.
- Solid understanding of data profiling, validation, and cleansing techniques.
- Strong understanding of data modeling, ETL/ELT processes, and modern data architecture frameworks.
- Expertise with cloud data platforms (Azure/AWS/GCP) for large-scale data processing.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data warehouses on Snowflake.
- Optimize database performance and ensure data quality.
- Troubleshoot and resolve technical issues related to data processing and analysis.
- Participate in code reviews and contribute to improving overall code quality.
Job Requirements:
- Strong understanding of data modeling and ETL concepts.
- Experience with Snowflake and DBT is highly desirable.
- Strong expertise in SQL with hands-on experience in querying, transforming, and analysing large datasets.
- Expertise with cloud data platforms (Azure/AWS/GCP) for large-scale data processing.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
- Familiarity with agile development methodologies.
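As a hypothetical illustration of the data profiling the role above calls for (the column values and the choice of stats are assumptions, not part of the posting), a minimal per-column profile can be computed like this:

```python
# Illustrative sketch only: basic column profiling of the kind run before
# loading a dataset into a warehouse. Stats chosen here are assumptions.
from collections import Counter

def profile_column(values):
    """Return count, null count, distinct count, and most frequent value."""
    non_null = [v for v in values if v is not None]
    return {
        "count": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "top": Counter(non_null).most_common(1)[0][0] if non_null else None,
    }

print(profile_column(["A", "B", "A", None, "A"]))
```

In a DBT/Snowflake stack the equivalent checks are usually expressed as tests (e.g., not-null and uniqueness tests) run against the warehouse rather than in application code.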
Posted 1 week ago
5.0 - 10.0 years
12 - 20 Lacs
Bengaluru
Work from Office
Senior Data Scientist
Req number: R5797
Employment type: Full time
Worksite flexibility: Hybrid
Who we are
CAI is a global technology services firm with over 8,500 associates worldwide and yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.
Job Summary
We're searching for an experienced Senior Data Scientist who excels at statistical analysis, feature engineering, and end-to-end machine learning operations. Your primary mission will be to build and productionize demand-forecasting models across thousands of SKUs, while owning the full model lifecycle, from data discovery through automated re-training and performance monitoring. This is a full-time, hybrid position.
Job Description
What You'll Do
Advanced ML Algorithms:
• Design, train, and evaluate supervised and unsupervised models (regression, classification, clustering, uplift).
• Apply automated hyperparameter optimization (Optuna, HyperOpt) and interpretability techniques (SHAP, LIME).
Data Analysis & Feature Engineering:
• Perform deep exploratory data analysis (EDA) to uncover patterns and anomalies.
• Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast).
• Ensure data quality through rigorous validation and automated checks.
Time-Series Forecasting (Demand):
• Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs.
• Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches.
• Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias.
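The accuracy and bias measures named in the forecasting duties above (MAPE, WAPE, bias) can be sketched in pure Python; the actual/forecast series below are hypothetical:

```python
# Forecast-accuracy sketch: MAPE, WAPE, and bias for a demand forecast.
# The sample actual/forecast series are hypothetical illustrations.

def mape(actual, forecast):
    """Mean absolute percentage error, in %; skips zero actuals."""
    terms = [abs(a - f) / abs(a) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(terms) / len(terms)

def wape(actual, forecast):
    """Weighted absolute percentage error, in %: sum|error| / sum|actual|."""
    total_err = sum(abs(a - f) for a, f in zip(actual, forecast))
    return 100.0 * total_err / sum(abs(a) for a in actual)

def bias(actual, forecast):
    """Mean signed error; positive means the model over-forecasts."""
    return sum(f - a for a, f in zip(actual, forecast)) / len(actual)

actual = [100, 50, 200, 25]
forecast = [110, 45, 190, 30]
```

WAPE is often preferred over MAPE for SKU-level demand because it weights errors by volume and is defined even when some periods have zero demand.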
MLOps & Model Lifecycle:
• Establish model tracking and registry (MLflow, SageMaker Model Registry).
• Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions).
• Monitor data and concept drift; trigger retuning or rollback as needed.
Statistical Analysis & Experimentation:
• Design and analyze A/B tests, causal inference studies, and Bayesian experiments.
• Provide statistically grounded insights and recommendations to stakeholders.
Collaboration & Leadership:
• Translate business objectives into data-driven solutions; present findings to executive and non-technical audiences.
• Mentor junior data scientists, review code/notebooks, and champion best practices.
What You'll Need
• M.S. in Statistics (preferred) or a related field such as Applied Mathematics, Computer Science, or Data Science.
• 5+ years building and deploying ML models in production.
• Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git.
• Demonstrated success delivering large-scale demand-forecasting or time-series solutions.
• Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining.
• Solid grounding in statistical inference, hypothesis testing, and experimental design.
• Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data.
• Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake).
• Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan).
• Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication.
Physical Demands
This role involves mostly sedentary work, with occasional movement around the office to attend meetings. It requires the ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor.
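The drift-monitoring duty above can be illustrated without any MLOps framework: compare a rolling window of recent forecast error against a baseline and flag retraining when it degrades. The window size and tolerance below are hypothetical:

```python
# Concept-drift check sketch: flag retraining when the rolling mean absolute
# error exceeds a tolerance multiple of the baseline error. Window size and
# tolerance are hypothetical parameters.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_mae, window=5, tolerance=1.5):
        self.baseline = baseline_mae      # MAE measured at deployment time
        self.errors = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, actual, forecast):
        """Record one error; return True if retraining should be triggered."""
        self.errors.append(abs(actual - forecast))
        if len(self.errors) < self.errors.maxlen:
            return False                  # not enough data for a full window
        recent_mae = sum(self.errors) / len(self.errors)
        return recent_mae > self.tolerance * self.baseline

monitor = DriftMonitor(baseline_mae=2.0, window=3, tolerance=1.5)
flags = [monitor.observe(a, f) for a, f in [(10, 9), (10, 8), (10, 4), (10, 3)]]
```

In production this check would typically run inside the scheduled pipeline (Airflow or similar) and trigger the retraining DAG rather than return a boolean.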
Reasonable accommodation statement If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to application.accommodations@cai.io or (888) 824-8111.
Posted 1 week ago
4.0 - 8.0 years
18 - 20 Lacs
Pune, Thiruvananthapuram
Work from Office
Senior Software Engineer A demanding role within the AD&M Group involving development and maintenance of projects in Postgres DB/Java-related technologies. This role offers opportunities to undertake project work under the supervision of a Project Manager. • Proven experience as a Data Warehouse Developer • Proficiency in SAS, Oracle DB, and Linux • Proficiency in PostgreSQL DB processes • Strong knowledge of SQL and database design • Experience with the DI Studio ETL tool and corresponding processes • Understanding of Data Warehouse concepts, data warehouse architecture, and data modelling • Strong experience in creating and maintaining Azure Synapse pipeline processes • Experience with source code administration and the deployment process via GitHub / Jenkins • Excellent problem-solving skills and attention to detail • Strong communication skills and customer focus • Ability to manage multiple projects/requests simultaneously and meet deadlines • Strong self-motivation and the will to see things through to the end Graduate or Postgraduate in Engineering, or a Postgraduate in a non-engineering discipline; any certification in Java technology will be an added advantage 4-6 years of experience -Databases: Postgres, Oracle -Languages: SQL, PL/SQL, Unix -Tools: Toad, DBeaver, SQL Developer, JIRA, Git, SAS ETL
Posted 1 week ago
0.0 - 5.0 years
7 - 10 Lacs
Noida
Work from Office
Principal Data Scientist / Senior Data Scientist / Data Scientist - NLP / Machine Learning Job Position : Principal Data Scientist / Sr. Data Scientist / Data Scientist / Jr. Data Scientist / Data Science Intern / Machine Learning Engineer - Global Tech AI Health Startup. Joining : Immediately. Location : Global HQ, Benovymed Global Technology Development and Innovation Center, Benovymed Global R&D, and AI Health Innovation Center in Delhi NCR. Job Role & Responsibility Description : We are looking for passionate candidates with strong experience and an entrepreneurial mindset to join us as a full-stack Data Scientist in an end-to-end, single-handed, multi-hat role: someone already working in applied AI (ML, deep learning, ANN, CNN), either as a single-handed doer or in a small data science team, with full end-to-end ownership of data science delivery, preferably at a reputed data-science-driven tech startup in healthcare. Required Skillsets : - Statistical, mathematical, and predictive-modeling skills, plus the business-strategy sense to build the algorithms necessary to ask the right questions and find the right answers. - Ability to communicate findings, orally and visually. - Understanding of how the products are developed and, even more important since big data touches consumer privacy, a clear set of ethical responsibilities. - A natural desire to go beneath the surface of a problem. - Confidence and self-assurance, as they will often have to deal with situations with a lot of unknowns. - Good knowledge of and experience with artificial neural networks (ANN) and AI conversational chatbots, and the ability to write the required algorithms and code. - Preferably in Python and its ecosystem: TensorFlow, Keras, AWS (ML and DL), NumPy, and PyTorch.
In addition, candidates need to be familiar with the following disciplines: Natural Language Processing, machine learning, conceptual modeling, statistical analysis, predictive modeling, and hypothesis testing. Data scientists should have at least some of the following capabilities : - Cleaning and formatting data; querying databases and performing statistical analysis; building, validating, and deploying AI data models. - Developing or programming databases. - A good understanding of design and architecture principles. Qualification : B.Tech / M.Tech in CSE / Electrical / Electronics & Communication / Maths & Computing / Statistics / BioTech / BioStat / Clinical Engineering in Data Science / Machine Learning from a top-rated tech university / top-rated IIT / IIIT. Mandatory Requirement : Experience : - At least 0 to 7 years of strong post-qualification experience on real industry projects in applied AI, ML, and deep-learning data science at an early-stage AI Health startup, in the role of a single-handed full-stack Principal Data Scientist / Sr. Data Scientist. - Hardcore experience in ANN / deep learning / machine learning / NLP, with at least 2-20 solid applied-AI research projects and applications, including hands-on work with ANN, CNN, RNN, and deep learning in digital health startups to solve specific complex problems. CTC : - Best package in the industry; terms and conditions apply as per our organization. - For deserving eligible candidates, company equity / ESOPs may also be given as part of the package as a long-term career opportunity. - Apply with your CV (with passport-size photo), current CTC, expected CTC, notice period, project details, and a cover letter.
We have a disruptive, innovative technology product line and novel solutions delivered through mobile app and web on our cloud digital health platform. It is a game changer that will surpass current medical practice, driven by scientific evidence and actionable data in the prevention, early detection, treatment, consultation, clinical decision, and disease management areas of the top 5 chronic diseases (including the top 20 cancers, diabetes, heart disease, mental health, and COPD) and COVID-19 pandemic management globally, to impact the lives of billions of people in an AI Health and chronic-diseases global market worth several trillion dollars and growing at 51.3% CAGR. Over the next 5 years we are looking to bring on professionals across our 360-degree domain expertise areas: 200 medical doctors and medical researchers, 250 data scientists and AI Health research scientists, and more than 5,000 growth-driver professionals in total, including over 2,000 IT professionals and 2,000 in business development, marketing and sales, and global operations, high-energy entrepreneurs (not traditional employees) from the world's best technology and research institutes, at our upcoming India Global HQ, Global Technology Development and Innovation Center, and Benovymed Global R&D and AI Health Innovation Lab. They will work on thousands of complex problems in healthcare to bring change to healthcare and impact the lives of billions of people globally, supporting our country's "Make in India" programme by solving every country's local problems and challenges as well as global ones.
Posted 1 week ago
0.0 - 5.0 years
2 - 6 Lacs
Gurugram
Work from Office
Joining : Immediately. Job Role & Responsibility Description : We are looking for passionate candidates with strong experience and an entrepreneurial mindset to join us as a full-stack Data Scientist in an end-to-end, single-handed, multi-hat role: someone already working in applied AI (ML, deep learning, ANN, CNN), either as a single-handed doer or in a small data science team, with full end-to-end ownership of data science delivery, preferably at a reputed data-science-driven tech startup in healthcare. Required Skillsets : - Statistical, mathematical, and predictive-modeling skills, plus the business-strategy sense to build the algorithms necessary to ask the right questions and find the right answers. - Ability to communicate findings, orally and visually. - Understanding of how the products are developed and, even more important since big data touches consumer privacy, a clear set of ethical responsibilities. - A natural desire to go beneath the surface of a problem. - Confidence and self-assurance, as they will often have to deal with situations with a lot of unknowns. - Good knowledge of and experience with artificial neural networks (ANN) and AI conversational chatbots, and the ability to write the required algorithms and code. - Preferably in Python and its ecosystem: TensorFlow, Keras, AWS (ML and DL), NumPy, and PyTorch. In addition, candidates need to be familiar with the following disciplines: Natural Language Processing, machine learning, conceptual modeling, statistical analysis, predictive modeling, and hypothesis testing. Data scientists should have at least some of the following capabilities : - Cleaning and formatting data; querying databases and performing statistical analysis; building, validating, and deploying AI data models.
- Developing or programming databases. - A good understanding of design and architecture principles. Qualification : B.Tech / M.Tech in CSE / Electrical / Electronics & Communication / Maths & Computing / Statistics / BioTech / BioStat / Clinical Engineering in Data Science / Machine Learning from a top-rated tech university / top-rated IIT / IIIT. Mandatory Requirement : Experience : - At least 0 to 7 years of strong post-qualification experience on real industry projects in applied AI, ML, and deep-learning data science at an early-stage AI Health startup, in the role of a single-handed full-stack Principal Data Scientist / Sr. Data Scientist. - Hardcore experience in ANN / deep learning / machine learning / NLP, with at least 2-20 solid applied-AI research projects and applications, including hands-on work with ANN, CNN, RNN, and deep learning in digital health startups to solve specific complex problems.
Posted 1 week ago