2.0 - 5.0 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must Have Skills: PySpark
Good to Have Skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Engage in code reviews to ensure adherence to best practices and standards.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data visualization techniques and tools.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- A 15 years full time education is required.
- The candidate should be ready to work in rotational shifts.

Qualification: 15 years full time education
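As a rough illustration of the PySpark and ETL skills this role calls for, here is a minimal batch pipeline sketch; the file paths, column names, and filter logic are assumptions for illustration, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV data (path and columns are illustrative assumptions)
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cast types, drop bad rows, derive a date column
cleaned = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Load: write partitioned Parquet for downstream analytics
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders")
```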
Posted 1 week ago
4.0 - 9.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must Have Skills: Microsoft Azure Data Services
Good to Have Skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that the applications developed meet both user needs and technical requirements. Your role will be pivotal in fostering a collaborative environment that encourages innovation and problem-solving among team members.

Roles & Responsibilities:
- Minimum of 4 years of experience in data engineering or similar roles.
- Proven expertise with Databricks and data processing frameworks.
- Technical skills: SQL, Spark, PySpark, Databricks, Python, Scala, Spark SQL.
- Strong understanding of data warehousing, ETL processes, and data pipeline design.
- Experience with SQL, Python, and Spark.
- Excellent problem-solving and analytical skills.
- Effective communication and teamwork abilities.

Professional & Technical Skills:
- Experience and knowledge of Azure SQL Database, Azure Data Factory, and ADLS.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services.
- This position is based in Pune.
- A 15 years full time education is required.

Qualification: 15 years full time education
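To make the Spark SQL and data-warehousing requirement concrete, here is a small hedged sketch that registers a DataFrame as a view and runs a windowed aggregation; the table and column names are invented for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("warehouse-demo").getOrCreate()

# Illustrative fact table; in practice this might come from ADLS or Azure SQL Database
sales = spark.createDataFrame(
    [("2024-01-01", "north", 100.0), ("2024-01-01", "south", 80.0),
     ("2024-01-02", "north", 120.0)],
    ["sale_date", "region", "amount"],
)
sales.createOrReplaceTempView("sales")

# Spark SQL with a window function: running total per region
result = spark.sql("""
    SELECT sale_date, region, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY sale_date) AS running_total
    FROM sales
    ORDER BY region, sale_date
""")
result.show()
```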
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who are we? Infosys (NYSE: INFY) is a global leader in consulting, technology, and outsourcing solutions. We enable clients in more than 46 countries to stay a step ahead of emerging business trends and outperform the competition. Infosys Consulting (IC) partners with clients from strategy through execution to transform their businesses in areas such as business/IT strategy, processes, organization, systems and risk. Infosys Consulting has 2600+ people across the US, Europe, APAC, and India, contributing over $628m in consulting revenue annually. We are Value Integrators: we deliver realized business value by managing transformations from strategy and setting direction through execution, including operating and optimizing delivered solutions. IC – SURE (Services, Utilities, Resources & Energy) is dedicated to serving Oil & Gas, Utilities, Resources and Service firms globally. The team in India works with its overseas counterparts and client teams to provide business consulting services to clients in the US, Europe, and Asia Pacific markets.

Responsibilities:
- Supporting pursuits with large Oil & Gas/Utilities prospects by articulating Infosys' unique value proposition through practical use cases across the value chain.
- Gathering, identifying, and documenting business requirements and creating functional specifications for new systems and processes.
- Assessing as-is processes, conducting gap analysis, designing to-be processes, and recommending changes.
- Applying Six Sigma, Lean, or similar methodologies to drive continuous improvement in technology projects.
- Technology project management, including managing technology vendors and client stakeholders.
- Managing large projects and programs in a multi-vendor, globally distributed team environment, leveraging Agile principles and DevOps capabilities.
- Collaborating closely with the IT Project Management Office.
- Supporting the implementation of client-specific digital solutions, including business case development, IT strategy, and tool/software selection.
- Designing and implementing scalable data pipelines, ETL/ELT workflows, and optimized data models across cloud data warehouses and lakes, enabling reliable access to high-quality data for business insights and strategic decision-making.
- Building and maintaining dashboards, reports, and visualizations using tools such as Power BI, Tableau, etc.
- Writing SQL queries and scripts to extract, clean, and manipulate data from multiple sources, and conducting deep-dive analyses to evaluate business performance, identify opportunities, and support operational decisions (a small cleaning sketch follows this listing).
- Integrating and governing data from diverse enterprise systems, ensuring data quality, integrity, and compliance with security and governance standards, supporting business-critical reporting, analytics, and regulatory needs.
- Collaborating with business stakeholders to translate strategic objectives into data-driven solutions, defining KPIs, uncovering actionable insights from structured and unstructured data, and enabling self-service analytics through partnerships with analysts and product teams.
- Working closely with client IT teams and business stakeholders to uncover opportunities and derive actionable insights.
- Participating in internal firm-building activities such as knowledge management.
- Supporting sales efforts for new and existing clients through proposal creation and sales presentation facilitation.
- Documenting data workflows, solutions, and processes clearly for both technical teams and business users.
- Working in Agile teams to manage data projects, align with PMO initiatives, and ensure business-focused delivery in global, multi-vendor environments.
- Supporting digital solution delivery, including IT strategy, business case development, tool selection, and implementation.
- Contributing to client pursuits and internal knowledge-sharing by presenting digital use cases and supporting proposal development.

Required Qualifications:
- 3–5 years of experience in data engineering with a strong track record in business-facing roles such as Business Analysis, Product Design, or Project Management, ideally within digital technology initiatives in the Oil & Gas or Utilities sector.
- Strong grasp of business analysis principles, with proven experience in gathering and documenting requirements and translating business needs into effective technical designs.
- Excellent written and verbal communication skills, with the ability to convey ideas to technical and non-technical audiences.
- Skilled in data integration, transformation, and orchestration tools such as AWS Glue, PySpark, Python, Azure Data Factory, Spark SQL, SQL, Palantir, and Databricks Pipeline Builder.
- Skilled in data visualization tools such as Power BI, Tableau, Palantir Contour, Palantir Workshop or similar, with hands-on experience using project and workflow tools like Azure DevOps (ADO), JIRA, VSTS, or ServiceNow (SNOW).
- Broad understanding of one or more modern digital technologies (e.g., Robotic Process Automation, Digital Transformation, Business Intelligence, AI/ML, Big Data, Data Analytics, IoT).
- Bachelor's degree or full-time MBA/PGDM from Tier 1/Tier 2 B-Schools in India or foreign equivalent.

Preferred Qualifications:
- Knowledge of one or more digital technologies (Robotic Process Automation, Digital Transformation, Business Intelligence, Artificial Intelligence, Machine Learning, Big Data technologies, Data Analytics, IoT, etc.) and their application in the Oil & Gas/Utilities industry.
- Strong knowledge of agile development practices (Scrum), methodologies, and tools.
- Excellent teamwork and written and verbal communication skills; ability to communicate ideas in both technical and user-friendly language.
- Ability to work as part of a cross-cultural team, including flexibility to support multiple time zones when necessary.
- Ability to interact with mid-level managers in clients' organizations.
- Understanding of the SDLC (Software Development Lifecycle).
- Proven ability to work in multidisciplinary teams and to build strong relationships with clients.

Preferred Location(s): Electronic City, Phase 1, Bengaluru, Karnataka; Pocharam Village, Hyderabad, Telangana; Sholinganallur, Chennai, Tamil Nadu; Hinjewadi Phase 3, Pune, Maharashtra; Sector 48, Tikri, Gurgaon, Haryana; Kishangarh, Chandigarh; Jaipur; Ahmedabad; Indore. Location of posting is subject to business needs and requirements.

The job entails sitting as well as working at a computer for extended periods of time. The candidate should be able to communicate by telephone, email, or face to face. Please note this description does not cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee. EOE/Minority/Female/Veteran/Disabled/Sexual Orientation/Gender Identity
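Since the role centers on extracting and cleaning data before visualizing it in Power BI or Tableau, here is a small hedged pandas sketch of that kind of cleanup step; the dataset, columns, and rules are invented for illustration.

```python
import pandas as pd

# Illustrative raw extract (in practice this might come from a SQL query)
raw = pd.DataFrame({
    "well_id": ["W-01", "W-02", None, "W-03"],
    "production_bbl": ["1200", "n/a", "950", "1100"],
    "report_date": ["2024-01-05", "2024-01-05", "2024-01-06", None],
})

cleaned = (
    raw
    .dropna(subset=["well_id", "report_date"])  # drop rows missing key fields
    .assign(
        production_bbl=lambda d: pd.to_numeric(d["production_bbl"], errors="coerce"),
        report_date=lambda d: pd.to_datetime(d["report_date"]),
    )
    .dropna(subset=["production_bbl"])          # drop unparseable measures
)

# Export a tidy file that a Power BI / Tableau dashboard could consume
cleaned.to_csv("well_production_clean.csv", index=False)
```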
Posted 1 week ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: Scala, PySpark
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in shaping the direction of application projects and ensuring that they meet the needs of the organization and its clients.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Good to Have Skills: Experience with PySpark and Scala.
- Strong understanding of data engineering principles and practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance and compliance standards.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
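As a hedged illustration of routine Databricks work, here is a minimal Delta Lake upsert using the MERGE pattern; it assumes a cluster with Delta Lake available (e.g., Databricks), and the table path and keys are placeholders, not details from the posting.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Incoming batch of changed records (illustrative)
updates = spark.createDataFrame(
    [(1, "active"), (2, "churned")], ["customer_id", "status"]
)

# Upsert into an existing Delta table (path is a placeholder)
target = DeltaTable.forPath(spark, "/mnt/lake/customers")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()     # update existing rows
    .whenNotMatchedInsertAll()  # insert new rows
    .execute()
)
```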
Posted 1 week ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must Have Skills: PySpark
Good to Have Skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will require you to balance technical oversight with team management, fostering an environment of innovation and collaboration.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to discuss progress and address any roadblocks.

Professional & Technical Skills:
- Candidate must have cloud knowledge, preferably AWS.
- Must have coding experience in Python and the Spark framework.
- Mandatory SQL knowledge.
- Good to have exposure to CI/CD and Docker containers.
- Strong verbal and written communication.
- Strong analytical and problem-solving skills.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in PySpark.
- This position is based in Chennai.
- A 15 years full time education is required.
- Candidates should be flexible to work in rotational shifts.

Qualification: 15 years full time education
Posted 1 week ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must Have Skills: Palantir Foundry
Good to Have Skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Project Role: Lead Data Engineer
Project Role Description: Design, build and enhance applications to meet business processes and requirements in Palantir Foundry.
Work Experience: Minimum 6 years
Must Have Skills: Palantir Foundry, PySpark
Good to Have Skills: Experience in PySpark, Python and SQL; knowledge of Big Data tools and technologies; organizational and project management experience.

Job Requirements & Key Responsibilities:
- Responsible for designing, developing, testing, and supporting data pipelines and applications on Palantir Foundry.
- Configure and customize Workshop to design and implement workflows and ontologies.
- Collaborate with data engineers and stakeholders to ensure successful deployment and operation of Palantir Foundry applications.
- Work with stakeholders, including the product owner and data and design teams, to assist with data-related technical issues, understand the requirements, and design the data pipeline.
- Work independently, troubleshoot issues, and optimize performance.
- Communicate design processes, ideas, and solutions clearly and effectively to the team and client.
- Assist junior team members in improving efficiency and productivity.

Technical Experience:
- Proficiency in PySpark, Python and SQL, with demonstrable ability to write and optimize SQL and Spark jobs.
- Hands-on experience with Palantir Foundry services such as Data Connection, Code Repository, Contour, data lineage, and health checks.
- Good to have working experience with Workshop, Ontology, and Slate.
- Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Experience in ingesting data from different external source systems using data connections and syncs.
- Good knowledge of Spark architecture and hands-on experience in performance tuning and code optimization.
- Proficient in managing both structured and unstructured data, with expertise in handling various file formats such as CSV, JSON, Parquet, and ORC.
- Experience in developing and managing scalable architecture and managing large data sets.
- Good understanding of data loading mechanisms and ability to implement strategies for capturing CDC.
- Nice to have: test-driven development and CI/CD workflows.
- Experience with version control software such as Git and with major hosting services (e.g., Azure DevOps, GitHub, Bitbucket, GitLab).
- Adherence to code best practices and guidelines that enhance code readability, maintainability, and overall quality.

Educational Qualification: 15 years of full-time education

Qualification: 15 years full time education
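For flavor, here is a minimal Python transform of the kind typically written in Foundry Code Repositories using the transforms API; the dataset paths are placeholders, and the exact decorator wiring should be checked against your own Foundry environment.

```python
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Company/curated/orders_clean"),   # placeholder output dataset path
    source=Input("/Company/raw/orders"),       # placeholder input dataset path
)
def compute(source):
    # Keep valid rows and standardize a column; the logic is illustrative only
    return (
        source
        .filter(F.col("order_id").isNotNull())
        .withColumn("amount", F.col("amount").cast("double"))
    )
```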
Posted 1 week ago
4.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in risk and compliance focus on maintaining regulatory compliance and managing risks for clients, providing advice and solutions. They help organisations navigate complex regulatory landscapes and enhance their internal controls to mitigate risks effectively. Those in enterprise risk management at PwC will focus on identifying and mitigating potential risks that could impact an organisation's operations and objectives. You will be responsible for developing business strategies to effectively manage and navigate risks in a rapidly changing business environment.

Focused on relationships, you are building meaningful client connections and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise, and building awareness of your strengths. You are expected to anticipate the needs of your teams and clients and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.

The Opportunity
When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you'll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You'll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills. As part of the OFRO - QA team you will ensure the quality and accuracy of dashboards and data workflows through meticulous testing and validation. As a Senior Associate, you will leverage your knowledge in data analysis and automation testing to mentor others, navigate complex testing environments, and uphold quality standards throughout the software development lifecycle. This position provides an exciting opportunity to work with advanced BI tools and contribute to continuous improvement initiatives in a dynamic team setting.

Key Responsibilities

ETL Development & Data Engineering
- Design, build, and maintain scalable ETL pipelines using Azure Data Factory, Databricks, and custom Python scripts.
- Integrate and ingest data from on-prem, cloud, and third-party APIs into modern data platforms.
- Perform data cleansing, validation, and transformation to ensure data quality and consistency (a PySpark sketch follows this listing).
- Machine learning experience is desirable.

Programming and Scripting
- Write robust and reusable Python scripts for data processing, automation, and orchestration.
- Develop complex SQL queries for data extraction, transformation, and reporting.
- Optimize code for performance, scalability, and maintainability.

Cloud & Platform Integration
- Work within Azure ecosystems, including Blob Storage, SQL Database, ADF, Synapse, and Key Vault.
- Use Databricks (PySpark/Delta Lake) for advanced transformations and big data processing.
- Hands-on Power BI experience is good to have.

Collaboration and Communication
- Work closely with cross-functional teams to ensure quality throughout the software development lifecycle.
- Provide regular status updates and test results to stakeholders.
- Participate in daily stand-ups, sprint planning, and Agile ceremonies.

Shift time: 2 pm to 11 pm IST
Total experience required: 4-9 years
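As a hedged sketch of the data cleansing and validation work described above, here is a minimal PySpark quality check; the path, rules, and column names are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("/mnt/staging/transactions")  # placeholder path

# Simple data-quality metrics: null keys, duplicate keys, out-of-range values
checks = {
    "null_ids": df.filter(F.col("txn_id").isNull()).count(),
    "duplicate_ids": df.count() - df.dropDuplicates(["txn_id"]).count(),
    "negative_amounts": df.filter(F.col("amount") < 0).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # In a real pipeline this might fail the run or route rows to quarantine
    raise ValueError(f"Data quality checks failed: {failed}")
```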
Posted 1 week ago
5.0 - 8.0 years
10 - 14 Lacs
Coimbatore
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must Have Skills: PySpark
Good to Have Skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in fostering a collaborative environment that encourages innovation and efficiency in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud platforms and services related to data processing.
- Ability to troubleshoot and optimize the performance of applications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Coimbatore.
- A 15 years full time education is required.
- The candidate should be ready to work in rotational shifts.

Qualification: 15 years full time education
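Troubleshooting and optimizing PySpark applications, as this role requires, often comes down to join strategy; here is a hedged sketch of forcing a broadcast join for a small dimension table (the paths and table names are invented).

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

facts = spark.read.parquet("/data/facts/events")    # large table (placeholder)
dims = spark.read.parquet("/data/dims/countries")   # small lookup (placeholder)

# Broadcasting the small side avoids a costly shuffle of the large table
joined = facts.join(broadcast(dims), on="country_code", how="left")

# Inspect the physical plan to confirm a BroadcastHashJoin was chosen
joined.explain()
```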
Posted 1 week ago
6.0 - 8.0 years
4 - 6 Lacs
Kochi, Chennai, Coimbatore
Hybrid
Required Skills:
- 5+ years of experience
- Azure Databricks
- PySpark
- Azure Data Factory
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Customers & Products
Job Family Group: Project Management Group

Job Description: As bp transitions to a coordinated energy company, we must adapt to a changing world and maintain driven performance. bp's customers & products (C&P) business area is setting up a business and technology centre (BTC) in Pune, India. This will support the delivery of an enhanced customer experience and drive innovation by building global capabilities at scale, using technology, and developing deep expertise. The BTC will be a core and connected part of our business, bringing together colleagues who report into their respective part of C&P, working together with other functions across bp. This is an exciting time to join bp and the customers & products BTC!

Job Title: Data Modeller SME Lead

About the role: As the Data Modeller Senior SME for Castrol, you will collaborate with business partners across Digital Operational Excellence, Technology, and Castrol's PUs, HUBs, functions, and markets to model and sustain curated datasets within the Castrol data ecosystem. The role ensures agile, continuous improvement of curated datasets aligned with the Data Modelling Framework and supports analytics, data science, operational MI, and the broader Digital Business Strategy. On top of the data lake, we have now enabled the MLOps environment (PySpark Pro) and Gurobi, with direct connections to run the advanced analytics and data science queries and algorithms written in Python. This enables the data analyst and data science teams to incubate insights in an agile way. The Data Modeller will contribute to and enable the growth of data science skills and capabilities within the role, the team, and the wider Castrol data analyst/science community; data science experience is a plus, but basic skills would suffice to start.

Experience & Education:
- Education: Degree in an analytical field (preferably IT or engineering) or 5+ years of relevant experience.
- Experience: Proven track record in delivering data models and curated datasets for major transformation projects; broad understanding of multiple data domains and their integration points; strong problem-solving and collaborative skills with a strategic approach.

Skills & Competencies:
- Expertise in data modeling and data wrangling of highly complex, high-dimensional data (ER Studio, Gurobi, SageMaker PRO).
- Proficiency in translating analytical insights from high-dimensional data.
- Skilled in Power BI data modeling and proof-of-concept design for data and analytics dashboarding.
- Proficiency in data science tools such as Python, Amazon SageMaker, GAMS, AMPL, ILOG, AIMMS, or similar.
- Ability to work across multiple levels of detail, including analytics, MI, statistics, data, process design principles, operating model intent, and systems design.
- Strong influencing skills to use expertise and experience to shape value delivery.
- Demonstrated success in multi-functional deployments and performance optimization.

BP Behaviors for Successful Delivery:
- Respect: Build trust through clear relationships.
- Excellence: Apply standard processes and strive for executional completion.
- One Team: Collaborate to improve team efficiency.

You will work with: You will be part of a 20-member Global Data & Analytics team. You will operate peer to peer in a team of seasoned global experts on process, data, advanced analytics, and data science.
The Global Data & Analytics team reports into the Castrol Digital Enablement team, which manages the digital estate for Castrol and enhances scalability and process and data integration. This D&A team is the driving force behind the Data & Analytics strategy, managing the Harmonized Data Lake and the Business Intelligence derived from it in support of the business strategy, and is a key pillar of value enablement through fast and accurate insights. As the Data Modeller SME Lead you will be exposed to a wide variety of collaborators in all layers of the Castrol leadership and our partners in GBS and Technology. Through data governance at the value centre you have great exposure to the operations and have the ability to influence and inspire change through value proposition engagements.

Travel Requirement: Negligible travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is not available for remote working.

Skills: Change control, Commissioning, start-up and handover, Conflict Management, Construction, Cost estimating and cost control (Inactive), Design development and delivery, Frameworks and methodologies, Governance arrangements, Performance management, Portfolio Management, Project and construction safety, Project execution planning, Project HSSE, Project Leadership, Project Team Management, Quality, Requirements Management, Reviews, Risk Management, Schedule and resources, Sourcing Management, Stakeholder Management, Strategy and business case, Supplier Relationship Management

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
Posted 1 week ago
4.0 - 9.0 years
9 - 19 Lacs
Noida, Hyderabad, Pune
Work from Office
Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.

Responsibilities:
- Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, and Secrets Manager.
- Build and maintain ETL/ELT pipelines for both batch and streaming data.
- Work with structured and unstructured datasets at scale.
- Apply data modeling principles and advanced SQL techniques.
- Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats.
- Collaborate with product teams to understand requirements and deliver optimized data solutions.
- Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
- Work independently with minimal supervision and strong ownership of deliverables.

Must Have:
- 4+ years of experience in Data Engineering on AWS Cloud.
- Hands-on expertise in Apache Spark (PySpark, SparkSQL), Delta Lake/Iceberg formats, Databricks on AWS, AWS Glue, Amazon Athena, and Amazon Redshift.
- Strong SQL skills and performance tuning experience on large datasets.
- Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
- Experience with environment setup, cluster management, user roles, and authentication in Databricks.
- Databricks Certified Data Engineer Professional certification (mandatory).

Good to Have:
- Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks.
- Experience with Databricks ML or Spark 3.x upgrades.
- Familiarity with Airflow, Step Functions, or other orchestration tools.
- Experience integrating Databricks with AWS services in a secured, production-ready environment.
- Experience with monitoring and cost optimization in AWS.

Key Skills:
- Languages: Python, SQL, PySpark
- Big Data Tools: Apache Spark, Delta Lake, Iceberg
- Databricks on AWS
- AWS Services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
- Version Control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
- Other: Data Modeling, ETL Methodology, Performance Optimization
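To ground the AWS Glue requirement, here is a hedged skeleton of a PySpark Glue job that reads from the Glue Data Catalog and writes Parquet to S3; the database, table, and bucket names are placeholders, not details from the posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table into a DynamicFrame (names are illustrative)
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Convert to a Spark DataFrame for standard transformations
df = source.toDF().filter("amount > 0")

# Write curated output to S3 as Parquet (bucket is a placeholder)
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```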
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Position: Senior Software Engineer - 7 to 10 years exp (Python and Golang)
Work Mode: Remote
Years of Experience: 7-10 years (5+ years exp in Python)
Office Location: SB Road, Pune; Remote (for other locations)
Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Responsibilities & Skills:
● Design, develop, and maintain high-quality software applications using Python and the Django framework.
● Collaborate with cross-functional teams to define, design, and ship new features and enhancements.
● Integrate third-party APIs (REST, SOAP, streaming services) into the existing product.
● Optimize application performance and ensure scalability and reliability.
● Write clean, maintainable, and efficient code, following best practices and coding standards.
● Participate in code reviews and provide constructive feedback to peers.
● Troubleshoot and debug applications, identifying root causes of issues.
● Stay current with industry trends, technologies, and best practices in software development.

Required Skills (Python):
● Bachelor's or Master's degree in Computer Science or a related field from IIT, NIT, or any other reputed institute.
● 3-10 years of experience in software development, with at least 4 years of background in Python and Django.
● Working knowledge of Golang (mandatory).
● Experience integrating third-party APIs (REST, SOAP, streaming services) into applications.
● Familiarity with database technologies, particularly MySQL (must have) and HBase (nice to have).
● Experience with message brokers like Kafka (must have), RabbitMQ, and Redis.
● Experience with version control systems such as GitHub.
● Familiarity with RESTful APIs and integration of third-party APIs.
● Strong understanding of software development methodologies, particularly Agile.
● Demonstrable experience with writing unit and functional tests.
● Excellent problem-solving skills and ability to work collaboratively in a team environment.
● Experience with database systems such as PostgreSQL, MySQL, or MongoDB.

Good To Have:
● Experience with cloud infrastructure like AWS/GCP or other cloud service providers.
● Knowledge of the IEEE 2030.5 standard (protocol).
● Knowledge of serverless architecture, preferably AWS Lambda.
● Experience with PySpark, Pandas, SciPy, and NumPy libraries is a plus.
● Experience in microservices architecture.
● Solid CI/CD experience.
● You are a Git guru and revel in collaborative workflows.
● You work on the command line confidently and are familiar with all the goodies that the Linux toolkit can provide.
● Knowledge of modern authorization mechanisms, such as JSON Web Tokens.
● Good to have front-end technologies like ReactJS and NodeJS.
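Since the role calls for hands-on work with message brokers like Kafka, here is a minimal hedged consumer sketch using the kafka-python library; the broker address, topic, and payload shape are assumptions for illustration.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Broker and topic are placeholders; deserialization assumes JSON payloads
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="order-processors",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    order = message.value
    # A real handler would validate, persist, or forward the event
    print(f"partition={message.partition} offset={message.offset} order={order}")
```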
Posted 1 week ago
5.0 years
0 Lacs
Greater Ahmedabad Area
On-site
Key Responsibilities:

Azure Cloud & Databricks:
- Design and build efficient data pipelines using Azure Databricks (PySpark).
- Implement business logic for data transformation and enrichment at scale.
- Manage and optimize Delta Lake storage solutions.

API Development:
- Develop REST APIs using FastAPI to expose processed data.
- Deploy APIs on Azure Functions for scalable and serverless data access.

Data Orchestration & ETL:
- Develop and manage Airflow DAGs to orchestrate ETL processes.
- Ingest and process data from various internal and external sources on a scheduled basis.

Database Management:
- Handle data storage and access using PostgreSQL and MongoDB.
- Write optimized SQL queries to support downstream applications and analytics.

Collaboration:
- Work cross-functionally with teams to deliver reliable, high-performance data solutions.
- Follow best practices in code quality, version control, and documentation.

Required Skills & Experience:
- 5+ years of hands-on experience as a Data Engineer.
- Strong experience with Azure Cloud services.
- Proficient in Azure Databricks, PySpark, and Delta Lake.
- Solid experience with Python and FastAPI for API development.
- Experience with Azure Functions for serverless API deployments.
- Skilled in managing ETL pipelines using Apache Airflow.
- Hands-on experience with PostgreSQL and MongoDB.
- Strong SQL skills and experience handling large datasets.
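As a hedged sketch of the FastAPI work described above, here is a minimal API that exposes processed records; the data source is stubbed in memory for a self-contained example, whereas the posting implies PostgreSQL or MongoDB behind it.

```python
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Processed Data API")

# Stand-in for a real PostgreSQL/MongoDB lookup (illustrative data)
PROCESSED = {
    "cust-001": {"customer_id": "cust-001", "lifetime_value": 1250.0},
    "cust-002": {"customer_id": "cust-002", "lifetime_value": 310.5},
}


@app.get("/customers/{customer_id}")
def get_customer(customer_id: str) -> dict:
    """Return one processed customer record or a 404."""
    record = PROCESSED.get(customer_id)
    if record is None:
        raise HTTPException(status_code=404, detail="customer not found")
    return record

# Run locally with: uvicorn main:app --reload
```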
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
This role is for one of Weekday's clients.
Min Experience: 7 years
Location: Gurugram
Job Type: full-time

Requirements
We are looking for an experienced Data Engineer with deep expertise in Azure and/or AWS Databricks to join our growing data engineering team. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing data pipelines, enabling seamless data integration and real-time analytics. This role is ideal for professionals who have hands-on experience with cloud-based data platforms, big data processing frameworks, and a strong understanding of data modeling, pipeline orchestration, and performance tuning. You will work closely with data scientists, analysts, and business stakeholders to deliver scalable and reliable data infrastructure that supports high-impact decision-making across the organization.

Key Responsibilities:
- Design and Development of Data Pipelines: Design, develop, and maintain scalable and efficient data pipelines using Databricks on Azure or AWS. Integrate data from multiple sources including structured, semi-structured, and unstructured datasets. Implement ETL/ELT pipelines for both batch and real-time processing.
- Cloud Data Platform Expertise: Use Azure Data Factory, Azure Synapse, AWS Glue, S3, Lambda, or similar services to build robust and secure data workflows. Optimize storage, compute, and processing costs using appropriate services within the cloud environment.
- Data Modeling & Governance: Build and maintain enterprise-grade data models, schemas, and lakehouse architecture. Ensure adherence to data governance, security, and privacy standards, including data lineage and cataloging.
- Performance Tuning & Monitoring: Optimize data pipelines and query performance through partitioning, caching, indexing, and memory management. Implement monitoring tools and alerts to proactively address pipeline failures or performance degradation.
- Collaboration & Documentation: Work closely with data analysts, BI developers, and data scientists to understand data requirements. Document all processes, pipelines, and data flows for transparency, maintainability, and knowledge sharing.
- Automation and CI/CD: Develop and maintain CI/CD pipelines for automated deployment of data pipelines and infrastructure using tools like GitHub Actions, Azure DevOps, or Jenkins. Implement data quality checks and unit tests as part of the data development lifecycle.

Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering roles with hands-on experience in Azure or AWS ecosystems.
- Strong expertise in Databricks (including notebooks, Delta Lake, and MLflow integration).
- Proficiency in Python and SQL; experience with PySpark or Spark strongly preferred.
- Experience with data lake architectures, data warehouse platforms (like Snowflake, Redshift, Synapse), and lakehouse principles.
- Familiarity with infrastructure as code (Terraform, ARM templates) is a plus.
- Strong analytical and problem-solving skills with attention to detail.
- Excellent verbal and written communication skills.
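For the real-time side of the pipelines this role describes, here is a hedged PySpark Structured Streaming sketch using the built-in rate source so it runs anywhere; a production job would read from Kafka, Event Hubs, or cloud storage instead.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

# The rate source emits (timestamp, value) rows, useful for local experiments
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Windowed aggregation: count events per 10-second window
counts = (
    events
    .withWatermark("timestamp", "30 seconds")
    .groupBy(F.window("timestamp", "10 seconds"))
    .count()
)

# Write to the console sink; real jobs would target Delta tables or a warehouse
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```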
Posted 1 week ago
5.0 - 10.0 years
14 - 24 Lacs
Bengaluru
Hybrid
Roles & Responsibilities:
- Design and implement end-to-end data engineering solutions by leveraging the full suite of Databricks and Fabric tools, including data ingestion, transformation, and modeling.
- Design, develop, and maintain end-to-end data pipelines using Spark, ensuring scalable, reliable, and cost-optimized solutions.
- Collaborate with internal teams and clients to understand business requirements and translate them into robust technical solutions.
- Conduct performance tuning and troubleshooting to identify and resolve any issues.
- Implement data governance and security best practices, including role-based access control, encryption, and auditing.
- Translate business requirements into high-quality technical documents, including data mapping, data processes, and operational support guides.
- Work closely with architects, product managers, and the reporting team to collect functional and system requirements.
- Work in a fast-paced environment and perform effectively in an agile development setting.

Requirements:
- 8+ years of experience in designing and implementing data solutions, with at least 4+ years in data engineering.
- Extensive experience with Databricks and Fabric, including a deep understanding of their architecture, data modeling, and real-time analytics.
- Minimum 6+ years of experience in Spark, PySpark, and Python.
- Strong experience in SQL, Spark SQL, data modeling, and RDBMS concepts.
- Strong knowledge of Data Fabric services, particularly Data Engineering, Data Warehouse, Data Factory, and Real-Time Intelligence.
- Strong problem-solving skills, with the ability to multi-task.
- Familiarity with security best practices in cloud environments, Active Directory, encryption, and data privacy compliance.
- Effective oral and written communication.
- Experience in Agile development, Scrum, and Application Lifecycle Management (ALM).
- Preference given to current or former Labcorp employees.

Education:
- Bachelor's in Engineering, MCA, or equivalent.
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Title: Python Developer

Job Summary: We are seeking a skilled Python Developer with expertise in data technologies to join our team. The ideal candidate will have a proven ability to design and implement processes for moving and transforming data across systems in both batch and real-time environments. This role requires advanced proficiency in Python, SQL, and JSON, along with hands-on experience in PySpark technologies and Azure cloud services.

Responsibilities:
- Design, develop, and implement scalable data pipelines for batch and real-time processing.
- Collaborate with data engineering and analytics teams to understand data requirements and deliver effective solutions.
- Optimize ETL/ELT workflows using Python and PySpark to handle large-scale datasets.
- Write advanced SQL queries to process, transform, and analyze data efficiently.
- Handle JSON data for integration, serialization, and deserialization in distributed systems.
- Utilize Azure cloud services to deploy, manage, and maintain data solutions.
- Troubleshoot and resolve issues in data pipelines and workflows.

Requirements:
- Proven experience in designing and implementing data integration processes.
- Advanced proficiency in Python, SQL, and JSON.
- Hands-on experience with PySpark technologies and distributed computing frameworks.
- Practical expertise in Azure cloud services (e.g., Azure Data Lake, Azure Synapse, etc.).
- Strong problem-solving and analytical skills.
- Ability to work independently and collaboratively in a fast-paced environment.

Preferred Qualifications:
- Familiarity with additional cloud platforms (GCP or AWS) is a plus.
- Experience with CI/CD tools and data versioning.
- Knowledge of data modeling and big data technologies.

Job Types: Full-time, Permanent
Pay: Up to ₹1,800,000.00 per year
Benefits: Health insurance, Provident Fund
Application Question(s): Are you serving notice period at your current organization?
Education: Bachelor's (Required)
Experience: Overall: 4 years (Required)
Location: Chennai, Tamil Nadu (Required)
Work Location: In person
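To illustrate the JSON handling mentioned in the responsibilities, here is a hedged PySpark sketch that parses JSON strings with an explicit schema; the event shape is invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("json-parse").getOrCreate()

# Raw rows carrying JSON payloads (illustrative; could come from Kafka or files)
raw = spark.createDataFrame(
    [('{"device": "d1", "temp": 21.5}',), ('{"device": "d2", "temp": 19.0}',)],
    ["payload"],
)

schema = StructType([
    StructField("device", StringType()),
    StructField("temp", DoubleType()),
])

# from_json turns the string column into a struct we can flatten
parsed = raw.select(F.from_json("payload", schema).alias("evt")).select("evt.*")
parsed.show()
```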
Posted 1 week ago
0 years
5 - 12 Lacs
Cochin
Remote
At impress.ai our mission is to make accurate hiring easier. We combine I/O psychology with AI to create application screening processes that allow each and every candidate to undergo a structured interview. While candidates benefit from the enhanced experience, recruiters benefit from the AI-enabled automation. Launched in 2017, impress.ai is a no-code, self-service platform that is highly focused on simplifying and accelerating various parts of the recruitment workflow. Our co-founders observed problems in hiring processes at several companies before building impress.ai. They noticed challenges in candidate experience as well as recruiters having a tough time with the large scale of hiring, the variety of roles, and handling various systems. After immense research, they found a solution in the power of AI and intelligent automation.

The Job: We are looking for a Senior Data Analyst. At impress.ai, you will be responsible for working on all aspects of data and analytics on the impress.ai platform. This ranges from providing analytics support and maintaining the data pipeline to research and development of AI/ML algorithms to be implemented in the platform.

Responsibilities:
- Work closely with stakeholders to identify issues related to the business and use data to propose solutions for effective decision-making.
- Build algorithms and design experiments.
- Write well-designed, maintainable, and performant code that adheres to impress.ai coding styles, conventions, and standards.
- Use machine learning and statistical techniques to provide solutions to problems.
- Develop interactive dashboards and visualizations (Metabase, Looker Studio, Power BI).
- Manage ETL pipelines using PySpark, AWS Glue, and Step Functions.
- Process, cleanse, and verify the integrity of data used for analysis.
- Enhance data collection procedures to include information that is relevant for building analytic systems.
- Communicate actionable insights using data, often for a non-technical audience.
- Work in cross-functional teams with product managers, software engineers, designers, QA, and Ops teams to achieve business objectives.
- Recruit and train a team of Junior Data Analysts.

You Bring to the Table:
- Proficiency in Python and SQL for data manipulation and analysis.
- Experience in multi-page dashboard building (Metabase, Looker Studio, or Power BI) and data storytelling.
- Strength in advanced SQL, cross-dialect querying, stored procedures, and data privacy best practices.
- Experience with Jupyter notebooks for data exploration and documentation.
- Experience with NLP tasks such as text and sentiment analysis.
- Strong understanding of statistical techniques (e.g., regression, distributions, statistical tests) and their application (a small sketch follows this listing).
- Knowledge of PySpark, Pandas, and AWS services like Glue, Athena, S3, Step Functions, and DMS for large-scale ETL workflows.
- Knowledge of machine learning and deep learning techniques and their practical trade-offs.
- Skill in prompt engineering for LLMs (e.g., ChatGPT, Claude), with experience in RAG, agentic AI, fine-tuning, and building scalable, secure GenAI applications.
- Excellent problem-solving and analytical skills.
- Effectiveness in communicating your data as a story and the ability to influence stakeholders.
- Effective written and verbal communication; experience in cross-functional collaboration.
- Ability to document and communicate technical requirements clearly.
- Familiarity with Agile methodology, Jira, Git, and version control systems.
- Curiosity and self-drive, with a passion for exploring new algorithms and tools.
- Proficiency in using software engineering tools for scalable and maintainable development.

Our Benefits:
- Work with cutting-edge technologies like Machine Learning, AI, and NLP and learn from experts in their fields in a fast-growing international SaaS startup.
- As a young business, we have a strong culture of learning and development. Join our discussions, brown bag sessions, and research-oriented sessions.
- A work environment where you are given the freedom to develop to your full potential and become a trusted member of the team.
- Opportunity to contribute to the success of a fast-growing, market-leading product.
- Work is important, and so is your personal well-being. The work culture at impress.ai is designed to ensure a healthy balance between the two.

Diversity and Inclusion are more than just words for us. We are committed to providing a respectful, safe, and inclusive workplace. Diversity at impress.ai means fostering a workplace in which individual differences are recognized, appreciated, and respected in ways that fully develop and utilize each person's talents and strengths. We pride ourselves on working with the best and we know our company runs on the hard work and dedication of our talented team members. Besides having employee-friendly policies and benefit schemes, impress.ai assures unbiased pay purely based on performance.

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,200,000.00 per year
Benefits: Health insurance, Internet reimbursement, Provident Fund, Work from home
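As a hedged example of the statistical testing this role expects, here is a minimal two-sample t-test with SciPy on synthetic data; in practice the samples would come from the platform's own metrics, and the scenario here is invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Synthetic completion times for two interview flows (illustrative data)
variant_a = rng.normal(loc=12.0, scale=2.0, size=200)
variant_b = rng.normal(loc=11.3, scale=2.0, size=200)

# Welch's t-test: does mean completion time differ between variants?
t_stat, p_value = stats.ttest_ind(variant_a, variant_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```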
Posted 1 week ago
15.0 years
2 - 8 Lacs
Hyderābād
On-site
Job Description

Who we are looking for: This is a hands-on Databricks Senior Developer position in State Street Global Technology Services. We are looking for a candidate with good knowledge of big data technology and strong development experience with Databricks. You will be managing the Databricks platform for the application, delivering enhancements and performance improvements, implementing AI/ML use cases, and leading the team.

What you will be responsible for: As Databricks Sr. Developer you will:
- Design and develop custom high-throughput and configurable frameworks/libraries.
- Drive change through collaboration, influence, and demonstration of POCs.
- Be responsible for all aspects of the software development lifecycle, including design, coding, integration testing, deployment, and documentation.
- Work collaboratively within an agile project team.
- Ensure best practices and coding standards are followed in the team.
- Provide technical mentoring to the team.
- Lead and manage the ETL team.

What we value: These skills will help you succeed in this role:
- Experience performing data analysis and data exploration.
- Experience working in an agile delivery environment.
- Hands-on development experience in Java is a big plus.
- Exposure to and understanding of DevOps best practices and CI/CD (i.e. Jenkins).
- Experience working in a multi-developer environment, using version control (i.e. Git).
- Strong knowledge of Databricks SQL/PySpark data engineering pipelines.
- Strong experience in Unix, Python, and complex SQL.
- Strong critical thinking, communication, and problem-solving skills.
- Strong hands-on experience in troubleshooting DevOps pipelines and AWS services.

Education & Preferred Qualifications:
- Bachelor's degree level qualification in a computer or IT-related subject.
- 15+ years of overall big data pipeline experience.
- 8+ years of Databricks hands-on experience.
- 8+ years of experience in cloud-based development, including AWS services.
Posted 1 week ago
0 years
10 Lacs
Hyderābād
On-site
Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever.

Responsibilities include, but are not limited to:
- Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing, doing analysis and machine learning on terabytes and petabytes of diverse datasets.
- Experience in the areas of statistical modeling, feature extraction and analysis, and supervised/unsupervised/semi-supervised learning (a minimal sketch follows this listing). Exposure to the semiconductor industry is a plus but not a requirement.
- Ability to extract data from different databases via SQL and other query languages, applying data cleansing, outlier identification, and missing data techniques.
- Strong software development skills.
- Strong verbal and written communication skills.

Experience with or desire to learn:
- Machine learning and other advanced analytical methods
- Fluency in Python and/or R
- PySpark and/or SparkR and/or sparklyr
- Hadoop (Hive, Spark, HBase)
- Teradata and/or other SQL databases
- TensorFlow and/or other statistical software, including scripting capability for automating analyses
- SSIS, ETL
- JavaScript, AngularJS 2.0, Tableau
- Experience working with time-series data, images, semi-supervised learning, and data with frequently changing distributions is a plus
- Experience working with Manufacturing Execution Systems (MES) is a plus
- Published papers from CVPR, NIPS, ICML, KDD, and other key conferences are a plus, but this is not a research position

About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities, from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com

Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.

AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
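As a small illustration of the data cleansing, outlier identification, and missing-data techniques the responsibilities above mention, here is a pandas sketch; the column names and the IQR rule are illustrative assumptions, not a prescribed Micron method:

```python
import pandas as pd

# Hypothetical tool readings, as might be extracted from a manufacturing database via SQL
df = pd.DataFrame({
    "tool_id": ["A", "A", "B", "B", "B"],
    "reading": [1.02, 0.98, 1.01, None, 9.75],
})

# Impute missing readings with the per-tool median
df["reading"] = df.groupby("tool_id")["reading"].transform(
    lambda s: s.fillna(s.median()))

# Flag outliers with the common 1.5*IQR rule
q1, q3 = df["reading"].quantile([0.25, 0.75])
iqr = q3 - q1
df["is_outlier"] = ~df["reading"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
print(df)
```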
Posted 1 week ago
7.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Job Description
Job Description for Consultant - Data Engineer
Key Responsibilities and Core Competencies:
- You will be responsible for managing and delivering multiple Pharma projects.
- Leading a team of at least 8 members, resolving their technical and business-related problems and other queries.
- Responsible for client interaction: requirements gathering, creating required documents, development, and quality assurance of the deliverables.
- Good collaboration with onshore and senior colleagues.
- Should have a fair understanding of Data Capabilities (Data Management, Data Quality, Master and Reference Data).
- Exposure to project management methodologies, including Agile and Waterfall.
- Experience working on RFPs would be a plus.
Required Technical Skills:
- Proficient in Python, PySpark, and SQL.
- Extensive hands-on experience in big data processing and cloud technologies such as AWS and Azure services, Databricks, etc.
- Strong experience working with cloud data warehouses like Snowflake, Redshift, Azure, etc. (see the sketch after this listing).
- Good experience in ETL, data modelling, and building ETL pipelines.
- Conceptual knowledge of relational database technologies, Data Lakes, Lakehouses, etc.
- Sound knowledge of data operations, quality, and data governance.
Preferred Qualifications:
- Bachelor's or Master's in Engineering/MCA or an equivalent degree.
- 7-9 years of experience as a Data Engineer, with at least 2 years managing medium- to large-scale programs.
- Minimum 5 years of Pharma and Life Sciences domain exposure (IQVIA, Veeva, Symphony, IMS, etc.).
- High motivation, good work ethic, maturity, self-organization, and personal initiative.
- Ability to work collaboratively and provide support to the team.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving skills.
Location
Preferably Hyderabad, India
About Us
Chryselys is a US-based Pharma Analytics & Business Consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. Chryselys was founded in the heart of US Silicon Valley in November 2019 with the vision of delivering high-value business consulting, solutions, and services to clients in the healthcare and life sciences space. We are trusted partners for organizations that seek to achieve high-impact transformations and reach their higher-purpose mission. Chryselys India supports our global clients to achieve high-impact transformations and reach their higher-purpose mission. Please visit https://linkedin.com/company/chryselys/mycompany and https://chryselys.com for more details.
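To make the cloud data warehouse skill above concrete, a minimal sketch of a bulk load into Snowflake using the snowflake-connector-python package; every credential, stage, and table name is a placeholder, and the Parquet stage layout is an assumption:

```python
import snowflake.connector

# All connection values are placeholders for illustration
conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="ETL_WH", database="PHARMA_DW", schema="STAGING",
)

cur = conn.cursor()
try:
    # Bulk-load staged Parquet files into a target table (stage/table names hypothetical)
    cur.execute("""
        COPY INTO staging.rx_claims
        FROM @raw_stage/claims/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    cur.close()
    conn.close()
```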
Posted 1 week ago
8.0 - 13.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Sr Data Engineer
What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution and will collaborate with business partners and other IS service leads to deliver IS capability and a roadmap in support of business strategy and goals. Real-world data analytics, visualization, and advanced technology play a vital role in supporting Amgen’s industry-leading, innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Collaborate and communicate effectively with product teams
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree / Bachelor's degree and 8 to 13 years of experience in Computer Science, IT, or a related field
Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing (a sketch follows this listing)
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training
- Proficiency in data analysis tools (e.g. SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Strong understanding of data governance frameworks, tools, and standard processes
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
Preferred Qualifications:
Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, OMOP
Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)
- Certified Data Scientist (preferred on Databricks or cloud environments)
- Machine Learning Certification (preferred on Databricks or cloud environments)
- SAFe for Teams certification (preferred)
Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
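For the big data performance-tuning skill listed under Must-Have Skills, a brief PySpark sketch of two common levers: broadcasting a small dimension table to avoid a shuffle-heavy join, and partitioning output so readers can prune files. Table and column names are hypothetical, not Amgen's:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("perf-tuning-demo").getOrCreate()

facts = spark.read.table("rwe.patient_events")  # large fact table (hypothetical)
dims = spark.read.table("rwe.site_dim")         # small dimension table (hypothetical)

# Broadcasting the small side turns a sort-merge join into a map-side join
joined = facts.join(F.broadcast(dims), on="site_id", how="left")

# Partitioning by event date lets downstream readers skip files they don't need;
# Delta output assumes a Databricks or Delta Lake-enabled environment
(joined.write
       .format("delta")
       .partitionBy("event_date")
       .mode("overwrite")
       .saveAsTable("rwe.patient_events_enriched"))
```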
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Hyderābād
On-site
Job Description – Sr. Data Engineer
Roles & Responsibilities:
- We are looking for a Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines.
- Integration of data from multiple sources or vendors to provide holistic insights from data.
- You are expected to build and manage data warehouse solutions: designing data models, creating ETL processes, implementing data quality mechanisms, etc.
- Perform the EDA (exploratory data analysis) required to troubleshoot data-related issues and assist in the resolution of data issues (see the sketch after this listing).
- Should have experience in client interaction.
- Experience in mentoring juniors and providing required guidance.
Required Technical Skills:
- Extensive experience in Python, PySpark, and SQL.
- Strong experience in data warehousing, ETL, data modelling, building ETL pipelines, and the Snowflake database.
- Must have strong hands-on experience in Azure and its services.
- Must be proficient in Databricks, Redshift, ADF, etc.
- Hands-on experience with cloud services such as Azure and AWS (S3, Glue, Lambda, CloudWatch, Athena).
- Sound knowledge of end-to-end data management, DataOps, quality, and data governance.
- Knowledge of SFDC and Waterfall/Agile methodology.
- Strong knowledge of the Pharma domain / life sciences commercial data operations.
Qualifications:
- Bachelor's or Master's in Engineering/MCA or an equivalent degree.
- 5-7 years of relevant industry experience as a Data Engineer.
- Experience working on Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, etc.
- High motivation, good work ethic, maturity, self-organization, and personal initiative.
- Ability to work collaboratively and provide support to the team.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving skills.
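As a sketch of the EDA-driven troubleshooting described above, here is a PySpark pass that profiles null rates and duplicate business keys in a vendor feed before it is loaded into the warehouse; the path and key columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-profile").getOrCreate()
df = spark.read.parquet("/data/vendor_feed/")  # hypothetical vendor extract

# Null rate per column is a quick signal for a broken upstream feed
total = max(df.count(), 1)  # guard against an empty feed
null_rates = {c: df.filter(F.col(c).isNull()).count() / total for c in df.columns}
print(sorted(null_rates.items(), key=lambda kv: -kv[1])[:5])

# Duplicate business keys often explain inflated downstream metrics
df.groupBy("account_id", "period").count().filter("count > 1").show(10)
```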
Posted 1 week ago
2.0 - 6.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Associate Data Engineer
What you will do
Let’s do this. Let’s change the world. In this vital role we seek a skilled Data Engineer to build and optimize our data infrastructure. As a key contributor, you will collaborate closely with cross-functional teams to design and implement robust data pipelines that efficiently extract, transform, and load data into our AWS-based data lake and data warehouse. Your expertise will be instrumental in empowering data-driven decision making through advanced analytics and predictive modeling.
Roles & Responsibilities:
- Building and optimizing data pipelines, data warehouses, and data lakes on the AWS and Databricks platforms
- Managing and maintaining the AWS and Databricks environments
- Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Maintaining system uptime and optimal performance
- Working closely with cross-functional teams to understand business requirements and translate them into technical solutions
- Exploring and implementing new tools and technologies to enhance ETL platform performance
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Bachelor’s degree and 2 to 6 years of experience.
Functional Skills:
Must-Have Skills:
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and columnar data stores; proven ability to optimize query performance on big data platforms
- Proficient in leveraging Python, PySpark, and Airflow to build scalable and efficient data ingestion, transformation, and loading processes (a minimal DAG sketch follows this listing)
- Ability to learn new technologies quickly
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
Good-to-Have Skills:
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with Apache Spark and Apache Airflow
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Experience with AWS, GCP, or Azure cloud services
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
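A minimal Airflow 2.x DAG illustrating the Python/PySpark/Airflow pipeline skill from the listing above; the task bodies and schedule are placeholders, not Amgen's actual jobs:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the data lake")      # placeholder extract step

def transform():
    print("run PySpark transformations")            # placeholder transform step

def load():
    print("load curated data into the warehouse")   # placeholder load step

with DAG(
    dag_id="etl_demo",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule` requires Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```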
Posted 1 week ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Data Engineer
What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution and will collaborate with business partners and other IS service leads to deliver IS capability and a roadmap in support of business strategy and goals. Real-world data analytics, visualization, and advanced technology play a vital role in supporting Amgen’s industry-leading, innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Collaborate and communicate effectively with product teams
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of experience in Computer Science, IT, or a related field
Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training
- Proficiency in data analysis tools (e.g. SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
Preferred Qualifications:
Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, OMOP
Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)
- Certified Data Scientist (preferred on Databricks or cloud environments)
- Machine Learning Certification (preferred on Databricks or cloud environments)
- SAFe for Teams certification (preferred)
Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago