5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will require you to balance technical oversight with team management, fostering an environment of innovation and collaboration.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to discuss progress and address any roadblocks.
Professional & Technical Skills:
- Candidate must have cloud knowledge, preferably AWS.
- Must have coding experience in Python and the Spark framework.
- Mandatory SQL knowledge.
- Good to have exposure to CI/CD and Docker containers.
- Strong verbal and written communication.
- Strong analytical and problem-solving skills.
Additional Information:
- The candidate should have a minimum of 7.5 years of experience in PySpark.
- This position is based in Chennai.
- A 15 years full time education is required.
- Candidates should be flexible to work in rotational shifts.
Qualification: 15 years full time education
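As an illustration only (not part of the job description) of the Python, Spark, and SQL skills this role lists, the following is a minimal PySpark batch-job sketch; the S3 paths, table name, and column names are hypothetical assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal batch job: read raw orders, aggregate with Spark SQL, write Parquet.
spark = SparkSession.builder.appName("orders_daily_summary").getOrCreate()

orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")  # hypothetical path
orders = orders.withColumn("amount", F.col("amount").cast("double"))

orders.createOrReplaceTempView("orders")
summary = spark.sql("""
    SELECT order_date, customer_id, SUM(amount) AS total_amount, COUNT(*) AS order_count
    FROM orders
    GROUP BY order_date, customer_id
""")

summary.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-bucket/curated/orders_summary/")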
Posted 5 days ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Palantir Foundry
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Project Role: Lead Data Engineer
Project Role Description: Design, build, and enhance applications to meet business processes and requirements in Palantir Foundry.
Work experience: Minimum 6 years
Must have Skills: Palantir Foundry, PySpark
Good to Have Skills: Experience in PySpark, Python, and SQL; knowledge of Big Data tools and technologies; organizational and project management experience.
Job Requirements & Key Responsibilities:
- Responsible for designing, developing, testing, and supporting data pipelines and applications on Palantir Foundry.
- Configure and customize Workshop to design and implement workflows and ontologies.
- Collaborate with data engineers and stakeholders to ensure successful deployment and operation of Palantir Foundry applications.
- Work with stakeholders, including the product owner and the data and design teams, to assist with data-related technical issues, understand requirements, and design the data pipeline.
- Work independently, troubleshoot issues, and optimize performance.
- Communicate design processes, ideas, and solutions clearly and effectively to the team and client.
- Assist junior team members in improving efficiency and productivity.
Technical Experience:
- Proficiency in PySpark, Python, and SQL, with demonstrable ability to write and optimize SQL and Spark jobs.
- Hands-on experience with Palantir Foundry services such as Data Connection, Code Repository, Contour, Data Lineage, and Health Checks.
- Good to have working experience with Workshop, Ontology, and Slate.
- Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Experience in ingesting data from different external source systems using data connections and syncs.
- Good knowledge of Spark architecture and hands-on experience in performance tuning and code optimization.
- Proficient in managing both structured and unstructured data, with expertise in handling various file formats such as CSV, JSON, Parquet, and ORC.
- Experience in developing and managing scalable architecture and managing large data sets.
- Good understanding of data loading mechanisms and the ability to implement strategies for capturing CDC.
- Nice to have: test-driven development and CI/CD workflows.
- Experience with version control software such as Git and with major hosting services (e.g. Azure DevOps, GitHub, Bitbucket, GitLab).
- Adherence to code best practices and guidelines that enhance code readability, maintainability, and overall quality.
Educational Qualification: 15 years full time education
Qualification: 15 years full time education
Posted 5 days ago
4.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in risk and compliance focus on maintaining regulatory compliance and managing risks for clients, providing advice and solutions. They help organisations navigate complex regulatory landscapes and enhance their internal controls to mitigate risks effectively. Those in enterprise risk management at PwC will focus on identifying and mitigating potential risks that could impact an organisation's operations and objectives. You will be responsible for developing business strategies to effectively manage and navigate risks in a rapidly changing business environment.
Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow.
Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
The Opportunity
When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you’ll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You’ll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills.
As part of the OFRO - QA team, you will ensure the quality and accuracy of dashboards and data workflows through meticulous testing and validation. As a Senior Associate, you will leverage your knowledge in data analysis and automation testing to mentor others, navigate complex testing environments, and uphold quality standards throughout the software development lifecycle. This position provides an exciting opportunity to work with advanced BI tools and contribute to continuous improvement initiatives in a dynamic team setting.
Key Responsibilities
ETL Development & Data Engineering
- Design, build, and maintain scalable ETL pipelines using Azure Data Factory, Databricks, and custom Python scripts.
- Integrate and ingest data from on-prem, cloud, and third-party APIs into modern data platforms.
- Perform data cleansing, validation, and transformation to ensure data quality and consistency.
- Machine learning experience is desirable.
Programming and Scripting
- Write robust and reusable Python scripts for data processing, automation, and orchestration.
- Develop complex SQL queries for data extraction, transformation, and reporting.
- Optimize code for performance, scalability, and maintainability.
Cloud & Platform Integration
- Work within Azure ecosystems, including Blob Storage, SQL Database, ADF, Synapse, and Key Vault.
- Use Databricks (PySpark/Delta Lake) for advanced transformations and big data processing.
- Hands-on Power BI experience is good to have.
Collaboration and Communication
- Work closely with cross-functional teams to ensure quality throughout the software development lifecycle.
- Provide regular status updates and test results to stakeholders.
- Participate in daily stand-ups, sprint planning, and Agile ceremonies.
Shift time: 2pm to 11pm IST
Total experience required: 4-9 years
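Purely as a hedged illustration of the data cleansing and validation work described above (not taken from the posting), a minimal PySpark sketch; the ADLS paths, column names, and validation rule are assumed for the example.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer_cleansing").getOrCreate()

raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/customers/")  # hypothetical ADLS path

# Basic cleansing: normalise strings, drop duplicates on the key, remove rows without a key.
cleaned = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
       .dropDuplicates(["customer_id"])
       .na.drop(subset=["customer_id"])
)

# Simple validation: count rows failing a rule before publishing to the curated zone.
invalid_emails = cleaned.filter(~F.col("email").rlike(r"^[^@]+@[^@]+\.[^@]+$")).count()
if invalid_emails > 0:
    print(f"Warning: {invalid_emails} rows have malformed emails")

cleaned.write.mode("overwrite").parquet("abfss://curated@examplelake.dfs.core.windows.net/customers/")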
Posted 5 days ago
5.0 - 8.0 years
10 - 14 Lacs
Coimbatore
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the development process. Your role will be pivotal in fostering a collaborative environment that encourages innovation and efficiency in application development.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud platforms and services related to data processing.
- Ability to troubleshoot and optimize application performance.
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Coimbatore.
- A 15 years full time education is required.
- Candidate should be ready to work in rotational shifts.
Qualification: 15 years full time education
Posted 5 days ago
6.0 - 8.0 years
4 - 6 Lacs
Kochi, Chennai, Coimbatore
Hybrid
Required Skills: 5+ years of experience; Azure Databricks; PySpark; Azure Data Factory
Posted 5 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Customers & Products
Job Family Group: Project Management Group
Job Description: As bp transitions to a coordinated energy company, we must adapt to a changing world and maintain driven performance. bp’s customers & products (C&P) business area is setting up a business and technology centre (BTC) in CITY, COUNTRY. This will support the delivery of an enhanced customer experience and drive innovation by building global capabilities at scale, using technology, and developing deep expertise. The BTC will be a core and connected part of our business, bringing together colleagues who report into their respective part of C&P, working together with other functions across bp. This is an exciting time to join bp and the customers & products BTC!
Job Title: Data Modeller SME Lead
About the role: As the Data Modeller Senior SME for Castrol, you will collaborate with business partners across Digital Operational Excellence, Technology, and Castrol’s PUs, HUBs, functions, and Markets to model and sustain curated datasets within the Castrol Data ecosystem. The role ensures agile, continuous improvement of curated datasets aligned with the Data Modelling Framework and supports analytics, data science, operational MI, and the broader Digital Business Strategy. On top of the Data Lake we have now enabled the MLOps environment (PySpark Pro) and Gurobi, with direct connections to run the advanced analytics and data science queries and algorithms written in Python. This enables the data analyst and data science teams to incubate insights in an agile way. The Data Modeller role will contribute to and enable the growth of data science skills and capabilities within the role, the team, and the wider Castrol data analyst/science community; data science experience is a plus, but basic skills would suffice to start.
Experience & Education:
Education: Degree in an analytical field (preferably IT or engineering) or 5+ years of relevant experience.
Experience: Proven track record in delivering data models and curated datasets for major transformation projects. Broad understanding of multiple data domains and their integration points. Strong problem-solving and collaborative skills with a strategic approach.
Skills & Competencies:
- Expertise in data modeling and data wrangling of highly complex, high-dimensional data (ER Studio, Gurobi, SageMaker PRO).
- Proficiency in translating analytical insights from high-dimensional data.
- Skilled in Power BI data modeling and proof-of-concept design for data and analytics dashboarding.
- Proficiency in data science tools such as Python, Amazon SageMaker, GAMS, AMPL, ILOG, AIMMS, or similar.
- Ability to work across multiple levels of detail, including analytics, MI, statistics, data, process design principles, operating model intent, and systems design.
- Strong influencing skills to use expertise and experience to shape value delivery.
- Demonstrated success in multi-functional deployments and performance optimization.
BP Behaviors for Successful Delivery:
- Respect: Build trust through clear relationships
- Excellence: Apply standard processes and strive for executional completion
- One Team: Collaborate to improve team efficiency
You will work with: You will be part of a 20-member Global Data & Analytics Team. You will operate peer to peer in a team of globally seasoned experts on Process, Data, Advanced Analytics and Data Science.
The Global Data & Analytics team reports into the Castrol Digital Enablement team, which manages the digital estate for Castrol and where we enhance scalability, process, and data integration. This D&A team is the driving force behind the Data & Analytics strategy, managing the Harmonized Data Lake and the Business Intelligence derived from it in support of the business strategy, and is a key pillar of value enablement through fast and accurate insights. As the Data Modeller SME Lead you will be exposed to a wide variety of collaborators in all layers of the Castrol Leadership and our partners in GBS and Technology. Through Data Governance at Value Centre you have great exposure to the operations and the ability to influence and inspire change through value proposition engagements.
Travel Requirement: Negligible travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is not available for remote working.
Skills: Change control, Commissioning, start-up and handover, Conflict Management, Construction, Cost estimating and cost control (Inactive), Design development and delivery, Frameworks and methodologies, Governance arrangements, Performance management, Portfolio Management, Project and construction safety, Project execution planning, Project HSSE, Project Leadership, Project Team Management, Quality, Requirements Management, Reviews, Risk Management, Schedule and resources, Sourcing Management, Stakeholder Management, Strategy and business case, Supplier Relationship Management
Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
Posted 5 days ago
4.0 - 9.0 years
9 - 19 Lacs
Noida, Hyderabad, Pune
Work from Office
Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.
Responsibilities:
- Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager.
- Build and maintain ETL/ELT pipelines for both batch and streaming data.
- Work with structured and unstructured datasets at scale.
- Apply Data Modeling principles and advanced SQL techniques.
- Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats.
- Collaborate with product teams to understand requirements and deliver optimized data solutions.
- Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
- Work independently with minimal supervision and strong ownership of deliverables.
Must Have:
- 4+ years of experience in Data Engineering on AWS Cloud.
- Hands-on expertise in: Apache Spark (PySpark, SparkSQL); Delta Lake / Iceberg formats; Databricks on AWS; AWS Glue, Amazon Athena, Amazon Redshift.
- Strong SQL skills and performance tuning experience on large datasets.
- Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
- Experience with environment setup, cluster management, user roles, and authentication in Databricks.
- Certified as a Databricks Certified Data Engineer Professional (mandatory).
Good To Have:
- Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks.
- Experience with Databricks ML or Spark 3.x upgrades.
- Familiarity with Airflow, Step Functions, or other orchestration tools.
- Experience integrating Databricks with AWS services in a secured, production-ready environment.
- Experience with monitoring and cost optimization in AWS.
Key Skills:
- Languages: Python, SQL, PySpark
- Big Data Tools: Apache Spark, Delta Lake, Iceberg
- Databricks on AWS
- AWS Services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
- Version Control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
- Other: Data Modeling, ETL Methodology, Performance Optimization
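As a hedged illustration of the Delta Lake pipeline work named above (not the employer's own code), a minimal upsert into a Delta table using the delta-spark API; the S3 paths and key column are assumptions, and a Delta-enabled Spark session (e.g. on Databricks) is presumed.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_upsert").getOrCreate()  # assumes Delta Lake is configured

updates = spark.read.parquet("s3://example-bucket/staging/customers/")  # hypothetical staging data
target_path = "s3://example-bucket/delta/customers"                     # hypothetical Delta table

if DeltaTable.isDeltaTable(spark, target_path):
    # Merge new and changed rows into the existing table on the business key.
    target = DeltaTable.forPath(spark, target_path)
    (target.alias("t")
           .merge(updates.alias("s"), "t.customer_id = s.customer_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())
else:
    # First load: create the Delta table from the incoming batch.
    updates.write.format("delta").save(target_path)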
Posted 5 days ago
7.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Position: Senior Software Engineer - 7 to 10 years exp (Python and Golang)
Work Mode: Remote
Years of Experience: 7-10 years (5+ years exp in Python)
Office Location: SB Road, Pune; Remote (for other locations)
Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Responsibilities & Skills:
● Design, develop, and maintain high-quality software applications using Python and the Django framework.
● Collaborate with cross-functional teams to define, design, and ship new features and enhancements.
● Integrate third-party APIs (REST, SOAP, streaming services) into the existing product.
● Optimize application performance and ensure scalability and reliability.
● Write clean, maintainable, and efficient code, following best practices and coding standards.
● Participate in code reviews and provide constructive feedback to peers.
● Troubleshoot and debug applications, identifying root causes of issues.
● Stay current with industry trends, technologies, and best practices in software development.
Required Skills (Python):
● Bachelor’s or Master’s degree in Computer Science or a related field from IIT, NIT, or any other reputed institute.
● 3-10 years of experience in software development, with at least 4 years of background in Python and Django.
● Working knowledge of Golang (mandatory).
● Experience integrating third-party APIs (REST, SOAP, streaming services) into applications.
● Familiarity with database technologies, particularly MySQL (must have) and HBase (nice to have).
● Experience with message brokers like Kafka (must), RabbitMQ, and Redis.
● Experience with version control systems such as GitHub.
● Familiarity with RESTful APIs and integration of third-party APIs.
● Strong understanding of software development methodologies, particularly Agile.
● Demonstrable experience with writing unit and functional tests.
● Excellent problem-solving skills and ability to work collaboratively in a team environment.
● Experience with database systems such as PostgreSQL, MySQL, or MongoDB.
Good To Have:
● Experience with cloud infrastructure like AWS/GCP or other cloud service providers.
● Knowledge of the IEEE 2030.5 standard (protocol).
● Knowledge of serverless architecture, preferably AWS Lambda.
● Experience with PySpark, Pandas, SciPy, NumPy libraries is a plus.
● Experience in microservices architecture.
● Solid CI/CD experience.
● You are a Git guru and revel in collaborative workflows.
● You work on the command line confidently and are familiar with all the goodies that the Linux toolkit can provide.
● Knowledge of modern authorization mechanisms, such as JSON Web Tokens.
● Good to have front-end technologies like ReactJS and NodeJS.
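For illustration only of the third-party REST API integration this role calls for, a small Python sketch using the requests library; the endpoint, token, pagination scheme, and payload fields are placeholders, not a real service.

import requests

BASE_URL = "https://api.example-partner.com/v1"  # placeholder endpoint

def fetch_orders(api_token: str, since: str) -> list[dict]:
    """Fetch orders from a hypothetical partner API, following simple page-number pagination."""
    orders, page = [], 1
    while True:
        resp = requests.get(
            f"{BASE_URL}/orders",
            headers={"Authorization": f"Bearer {api_token}"},
            params={"updated_since": since, "page": page},
            timeout=10,
        )
        resp.raise_for_status()  # surface HTTP errors instead of silently continuing
        batch = resp.json().get("results", [])
        if not batch:
            return orders
        orders.extend(batch)
        page += 1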
Posted 5 days ago
5.0 years
0 Lacs
Greater Ahmedabad Area
On-site
Key Responsibilities:
· Azure Cloud & Databricks:
o Design and build efficient data pipelines using Azure Databricks (PySpark).
o Implement business logic for data transformation and enrichment at scale.
o Manage and optimize Delta Lake storage solutions.
· API Development:
o Develop REST APIs using FastAPI to expose processed data.
o Deploy APIs on Azure Functions for scalable and serverless data access.
· Data Orchestration & ETL:
o Develop and manage Airflow DAGs to orchestrate ETL processes.
o Ingest and process data from various internal and external sources on a scheduled basis.
· Database Management:
o Handle data storage and access using PostgreSQL and MongoDB.
o Write optimized SQL queries to support downstream applications and analytics.
· Collaboration:
o Work cross-functionally with teams to deliver reliable, high-performance data solutions.
o Follow best practices in code quality, version control, and documentation.
Required Skills & Experience:
· 5+ years of hands-on experience as a Data Engineer.
· Strong experience with Azure Cloud services.
· Proficient in Azure Databricks, PySpark, and Delta Lake.
· Solid experience with Python and FastAPI for API development.
· Experience with Azure Functions for serverless API deployments.
· Skilled in managing ETL pipelines using Apache Airflow.
· Hands-on experience with PostgreSQL and MongoDB.
· Strong SQL skills and experience handling large datasets.
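As an illustrative sketch of the "expose processed data via REST APIs with FastAPI" responsibility above (not part of the posting), a minimal endpoint; the route, model fields, and in-memory data store are assumptions standing in for the real curated data source.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Curated Data API")

class Metric(BaseModel):
    name: str
    value: float

# Stand-in for data that would really be served from Delta Lake or PostgreSQL.
_FAKE_STORE = {"daily_active_users": 1234.0}

@app.get("/metrics/{name}", response_model=Metric)
def get_metric(name: str) -> Metric:
    # Return a single named metric, or 404 if it is unknown.
    if name not in _FAKE_STORE:
        raise HTTPException(status_code=404, detail="metric not found")
    return Metric(name=name, value=_FAKE_STORE[name])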
Posted 5 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Key Responsibilities:
· Azure Cloud & Databricks:
o Design and build efficient data pipelines using Azure Databricks (PySpark).
o Implement business logic for data transformation and enrichment at scale.
o Manage and optimize Delta Lake storage solutions.
· API Development:
o Develop REST APIs using FastAPI to expose processed data.
o Deploy APIs on Azure Functions for scalable and serverless data access.
· Data Orchestration & ETL:
o Develop and manage Airflow DAGs to orchestrate ETL processes.
o Ingest and process data from various internal and external sources on a scheduled basis.
· Database Management:
o Handle data storage and access using PostgreSQL and MongoDB.
o Write optimized SQL queries to support downstream applications and analytics.
· Collaboration:
o Work cross-functionally with teams to deliver reliable, high-performance data solutions.
o Follow best practices in code quality, version control, and documentation.
Required Skills & Experience:
· 5+ years of hands-on experience as a Data Engineer.
· Strong experience with Azure Cloud services.
· Proficient in Azure Databricks, PySpark, and Delta Lake.
· Solid experience with Python and FastAPI for API development.
· Experience with Azure Functions for serverless API deployments.
· Skilled in managing ETL pipelines using Apache Airflow.
· Hands-on experience with PostgreSQL and MongoDB.
· Strong SQL skills and experience handling large datasets.
Posted 5 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
This role is for one of Weekday's clients Min Experience: 7 years Location: Gurugram JobType: full-time Requirements We are looking for an experienced Data Engineer with deep expertise in Azure and/or AWS Databricks to join our growing data engineering team. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing data pipelines, enabling seamless data integration and real-time analytics. This role is ideal for professionals who have hands-on experience with cloud-based data platforms, big data processing frameworks, and a strong understanding of data modeling, pipeline orchestration, and performance tuning. You will work closely with data scientists, analysts, and business stakeholders to deliver scalable and reliable data infrastructure that supports high-impact decision-making across the organization. Key Responsibilities: Design and Development of Data Pipelines: Design, develop, and maintain scalable and efficient data pipelines using Databricks on Azure or AWS. Integrate data from multiple sources including structured, semi-structured, and unstructured datasets. Implement ETL/ELT pipelines for both batch and real-time processing. Cloud Data Platform Expertise: Use Azure Data Factory, Azure Synapse, AWS Glue, S3, Lambda, or similar services to build robust and secure data workflows. Optimize storage, compute, and processing costs using appropriate services within the cloud environment. Data Modeling & Governance: Build and maintain enterprise-grade data models, schemas, and lakehouse architecture. Ensure adherence to data governance, security, and privacy standards, including data lineage and cataloging. Performance Tuning & Monitoring: Optimize data pipelines and query performance through partitioning, caching, indexing, and memory management. Implement monitoring tools and alerts to proactively address pipeline failures or performance degradation. Collaboration & Documentation: Work closely with data analysts, BI developers, and data scientists to understand data requirements. Document all processes, pipelines, and data flows for transparency, maintainability, and knowledge sharing. Automation and CI/CD: Develop and maintain CI/CD pipelines for automated deployment of data pipelines and infrastructure using tools like GitHub Actions, Azure DevOps, or Jenkins. Implement data quality checks and unit tests as part of the data development lifecycle. Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field. 7+ years of experience in data engineering roles with hands-on experience in Azure or AWS ecosystems. Strong expertise in Databricks (including notebooks, Delta Lake, and MLflow integration). Proficiency in Python and SQL; experience with PySpark or Spark strongly preferred. Experience with data lake architectures, data warehouse platforms (like Snowflake, Redshift, Synapse), and lakehouse principles. Familiarity with infrastructure as code (Terraform, ARM templates) is a plus. Strong analytical and problem-solving skills with attention to detail. Excellent verbal and written communication skills.
Posted 5 days ago
5.0 - 10.0 years
14 - 24 Lacs
Bengaluru
Hybrid
Roles & Responsibilities:
- Design and implement end-to-end data engineering solutions by leveraging the full suite of Databricks and Fabric tools, including data ingestion, transformation, and modeling.
- Design, develop, and maintain end-to-end data pipelines using Spark, ensuring scalable, reliable, and cost-optimized solutions.
- Collaborate with internal teams and clients to understand business requirements and translate them into robust technical solutions.
- Conduct performance tuning and troubleshooting to identify and resolve any issues.
- Implement data governance and security best practices, including role-based access control, encryption, and auditing.
- Translate business requirements into high-quality technical documents including data mapping, data processes, and operational support guides.
- Work closely with architects, product managers, and the reporting team to collect functional and system requirements.
- Work in a fast-paced environment and perform effectively in an agile development setting.
REQUIREMENTS
- 8+ years of experience in designing and implementing data solutions, with at least 4+ years of experience in data engineering.
- Extensive experience with Databricks and Fabric, including a deep understanding of their architecture, data modeling, and real-time analytics.
- Minimum 6+ years of experience in Spark, PySpark, and Python.
- Must have strong experience in SQL, Spark SQL, data modeling, and RDBMS concepts.
- Strong knowledge of Data Fabric services, particularly Data Engineering, Data Warehouse, Data Factory, and Real-time Intelligence.
- Strong problem-solving skills, with the ability to multi-task.
- Familiarity with security best practices in cloud environments, Active Directory, encryption, and data privacy compliance.
- Effective oral and written communication.
- Experience in AGILE development, SCRUM, and Application Lifecycle Management (ALM).
- Preference given to current or former Labcorp employees.
EDUCATION
Bachelor's in Engineering, MCA, or equivalent.
Posted 5 days ago
0.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Title: Python Developer
Job Summary: We are seeking a skilled Python Developer with expertise in data technologies to join our team. The ideal candidate will have a proven ability to design and implement processes for moving and transforming data across systems in both batch and real-time environments. This role requires advanced proficiency in Python, SQL, and JSON, along with hands-on experience in PySpark technologies and Azure cloud services.
Responsibilities:
∙ Design, develop, and implement scalable data pipelines for batch and real-time processing.
∙ Collaborate with data engineering and analytics teams to understand data requirements and deliver effective solutions.
∙ Optimize ETL/ELT workflows using Python and PySpark to handle large-scale datasets.
∙ Write advanced SQL queries to process, transform, and analyze data efficiently.
∙ Handle JSON data for integration, serialization, and deserialization in distributed systems.
∙ Utilize Azure cloud services to deploy, manage, and maintain data solutions.
∙ Troubleshoot and resolve issues in data pipelines and workflows.
Requirements:
∙ Proven experience in designing and implementing data integration processes.
∙ Advanced proficiency in Python, SQL, and JSON.
∙ Hands-on experience with PySpark technologies and distributed computing frameworks.
∙ Practical expertise in Azure cloud services (e.g., Azure Data Lake, Azure Synapse, etc.).
∙ Strong problem-solving and analytical skills.
∙ Ability to work independently and collaboratively in a fast-paced environment.
Preferred Qualifications:
∙ Familiarity with additional cloud platforms (GCP or AWS) is a plus.
∙ Experience with CI/CD tools and data versioning.
∙ Knowledge of data modeling and big data technologies.
Job Types: Full-time, Permanent
Pay: Up to ₹1,800,000.00 per year
Benefits: Health insurance, Provident Fund
Application Question(s): Are you serving notice period at your current organization?
Education: Bachelor's (Required)
Experience: overall: 4 years (Required)
Location: Chennai, Tamil Nadu (Required)
Work Location: In person
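As a hedged sketch of the JSON handling in distributed pipelines mentioned above (illustrative only; the schema, column names, and storage paths are assumptions), parsing a JSON string column with PySpark:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("json_events").getOrCreate()

# Hypothetical events dataset with a raw JSON payload column named "payload_json".
events = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/events/")

payload_schema = StructType([
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

# Deserialize the JSON payload into typed columns and keep only the fields needed downstream.
parsed = (
    events.withColumn("payload", F.from_json(F.col("payload_json"), payload_schema))
          .select("event_id", "payload.event_type", "payload.amount")
)
parsed.write.mode("append").parquet("abfss://curated@examplelake.dfs.core.windows.net/events/")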
Posted 5 days ago
0 years
5 - 12 Lacs
Cochin
Remote
At impress.ai our mission is to make accurate hiring easier. We combine I/O Psychology with AI to create application screening processes that allow each and every candidate to undergo a structured interview. While candidates benefit from the enhanced experience, recruiters benefit from the AI-enabled automation. Launched in 2017, impress.ai is a no-code, self-service platform that is highly focused on simplifying and accelerating various parts of the recruitment workflow. Our co-founders observed problems in hiring processes at several companies before building impress.ai. They noticed challenges in candidate experience as well as recruiters having a tough time with a large scale of hiring, the variety of roles, and handling various systems. After immense research, they found a solution to the power of AI and intelligent automation. The Job: We are looking for a Senior Data Analyst at impress.ai you will be responsible for working on all aspects of data and analytics on the impress.ai platform. This ranges from providing analytics support, to maintaining the data pipeline as well as research and development of AI/ML algorithms that are to be implemented in the platform. Responsibilities: Work closely with stakeholders to identify issues related to business and use data to propose solutions for effective decision making Build Algorithms and design experiments Write well-designed, maintainable and performant code that adheres to impress.ai coding styles, conventions and standards Use Machine learning and statistical techniques to provide solutions to problems Develop interactive dashboards and visualizations (Metabase, Looker Studio, Power BI). Manage ETL pipelines using PySpark, AWS Glue, and Step Functions. Processing, cleansing, and verifying the integrity of data used for analysis Enhancing data collection procedures to include information that is relevant for building analytic systems Communicate actionable insights using data, often for a non-technical audience. Work in cross functional teams with product managers, Software Engineers. designers, QA and Ops teams to achieve business objectives. Recruit and train a team of Junior Data Analysts You Bring to the Table: Proficient in Python and SQL for data manipulation and analysis. 
Experienced in multi-page dashboard building (Metabase, Looker Studio or Power BI) and data storytelling
Strong in advanced SQL, cross-dialect querying, stored procedures, and data privacy best practices
Experience with Jupyter notebooks for data exploration and documentation
Experience with NLP tasks such as text and sentiment analysis
Strong understanding of statistical techniques (e.g., regression, distributions, statistical tests) and their application
Knowledge of PySpark, Pandas, and AWS services like Glue, Athena, S3, Step Functions, and DMS for large-scale ETL workflows
Knowledge of machine learning and deep learning techniques and their practical trade-offs
Skilled in prompt engineering for LLMs (e.g., ChatGPT, Claude), with experience in RAG, Agentic AI, fine-tuning, and building scalable, secure GenAI applications
Excellent problem-solving and analytical skills
Effective in communicating your data as a story and the ability to influence stakeholders
Effective written and verbal communicator; experienced in cross-functional collaboration
Ability to document and communicate technical requirements clearly
Familiar with Agile methodology, Jira, Git, and version control systems
Curious and self-driven with a passion for exploring new algorithms and tools
Proficient in using software engineering tools for scalable and maintainable development
Our Benefits: Work with cutting-edge technologies like Machine Learning, AI, and NLP and learn from the experts in their fields in a fast-growing international SaaS startup. As a young business, we have a strong culture of learning and development. Join our discussions, brown bag sessions, and research-oriented sessions. A work environment where you are given the freedom to develop to your full potential and become a trusted member of the team. Opportunity to contribute to the success of a fast-growing, market-leading product. Work is important, and so is your personal well-being. The work culture at impress.ai is designed to ensure a healthy balance between the two. Diversity and Inclusion are more than just words for us. We are committed to providing a respectful, safe, and inclusive workplace. Diversity at impress.ai means fostering a workplace in which individual differences are recognized, appreciated, and respected in ways that fully develop and utilize each person’s talents and strengths. We pride ourselves on working with the best and we know our company runs on the hard work and dedication of our talented team members. Besides having employee-friendly policies and benefit schemes, impress.ai assures unbiased pay purely based on performance.
Job Type: Full-time
Pay: ₹500,000.00 - ₹1,200,000.00 per year
Benefits: Health insurance, Internet reimbursement, Provident Fund, Work from home
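A small illustrative sketch (not from the posting) of the statistical-testing skills listed above, using pandas and SciPy on synthetic data; the scenario, metric name, and numbers are invented for the example.

import numpy as np
import pandas as pd
from scipy import stats

# Synthetic example: compare completion times of two interview flows.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "flow": ["A"] * 500 + ["B"] * 500,
    "completion_time": np.concatenate([rng.normal(12, 3, 500), rng.normal(11, 3, 500)]),
})

a = df.loc[df["flow"] == "A", "completion_time"]
b = df.loc[df["flow"] == "B", "completion_time"]

# Welch's t-test, which does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.4f}")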
Posted 5 days ago
15.0 years
2 - 8 Lacs
Hyderābād
On-site
Job Description Who we are looking for This is a hands-on Databricks Senior Developer position in State Street Global Technology Services. We are looking for candidate with good knowledge on Bigdata technology and strong development experience with Databricks. You will be managing the Databricks platform for the application with enhancements, performance improvements, implementation of AI/ML use cases and leading the team. What you will be responsible for As Databricks Sr. Developer you will Design & Develop custom high throughput and configurable frameworks/libraries Ability to drive change through collaboration, influence and demonstration of POCs Responsible for all aspects of the software development lifecycle, including design, coding, integration testing, deployment, and documentation Work collaboratively within an agile project team Ensure best practices and coding standards are followed in the team Technically provide mentoring to the team Leading and Managing the ETL team. What we value These skills will help you succeed in this role Experience performing data analysis and data exploration Experience working in an agile delivery environment Hands on development experience on Java is big plus Exposure/understanding of DevOps best practice and CICD (i.e. Jenkins) Experience working in a multi-developer environment, using version control (i.e. Git) Strong knowledge on Databricks SQL/Pyspark - Data engineering pipeline Strong Experience in Unix, Python and complex SQL Strong critical thinking, communication, and problem-solving skills Strong hands-on experience in troubleshooting DevOps pipelines and AWS services Education & Preferred Qualifications Bachelor’s Degree level qualification in a computer or IT related subject 15+ years of overall Bigdata data pipeline experience 8+ years of Databricks hands on experience 8+ years of experience on cloud-based development including AWS Services
Posted 5 days ago
0 years
10 Lacs
Hyderābād
On-site
Our vision is to transform how the world uses information to enrich life for all . Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. Responsibilities include, but not limited to: Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing doing analysis and machine learning on terabytes and petabytes of diverse datasets. Experience in the areas: statistical modeling, feature extraction and analysis, supervised/unsupervised/semi-supervised learning. Exposure to the semiconductor industry is a plus but not a requirement. Ability to extract data from different databases via SQL and other query languages and applying data cleansing, outlier identification, and missing data techniques. Strong software development skills. Strong verbal and written communication skills. Experience with or desire to learn: Machine learning and other advanced analytical methods Fluency in Python and/or R pySpark and/or SparkR and/or SparklyR Hadoop (Hive, Spark, HBase) Teradata and/or another SQL databases Tensorflow, and/or other statistical software including scripting capability for automating analyses SSIS, ETL Javascript, AngularJS 2.0, Tableau Experience working with time-series data, images, semi-supervised learning, and data with frequently changing distributions is a plus Experience working with Manufacturing Execution Systems (MES) is a plus Existing papers from CVPR, NIPS, ICML, KDD, and other key conferences are plus, but this is not a research position About Micron Technology, Inc. We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all . With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com Micron Prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron. AI alert : Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. 
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website in the About Micron Technology, Inc.
Posted 5 days ago
7.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Job Description for Consultant - Data Engineer
Key Responsibilities and Core Competencies:
You will be responsible for managing and delivering multiple Pharma projects. Leading a team of at least 8 members, resolving their technical and business-related problems and other queries. Responsible for client interaction: requirements gathering, creating required documents, development, and quality assurance of the deliverables. Good collaboration with onshore and senior colleagues. Should have a fair understanding of Data Capabilities (Data Management, Data Quality, Master and Reference Data). Exposure to project management methodologies including Agile and Waterfall. Experience working on RFPs would be a plus.
Required Technical Skills:
Proficient in Python, PySpark, and SQL. Extensive hands-on experience in big data processing and cloud technologies like AWS and Azure services, Databricks, etc. Strong experience working with cloud data warehouses like Snowflake, Redshift, Azure, etc. Good experience in ETL, Data Modelling, and building ETL pipelines. Conceptual knowledge of relational database technologies, Data Lakes, Lakehouses, etc. Sound knowledge of data operations, quality, and data governance.
Preferred Qualifications:
Bachelor’s or Master’s in Engineering/MCA or an equivalent degree. 7-9 years of experience as a Data Engineer, with at least 2 years managing medium to large scale programs. Minimum 5 years of Pharma and Life Science domain exposure in IQVIA, Veeva, Symphony, IMS, etc. High motivation, good work ethic, maturity, self-organization, and personal initiative. Ability to work collaboratively and provide support to the team. Excellent written and verbal communication skills. Strong analytical and problem-solving skills.
Location: Preferably Hyderabad, India
About Us: Chryselys is a US-based Pharma Analytics & Business consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. Chryselys was founded in the heart of US Silicon Valley in November 2019 with the vision of delivering high-value business consulting, solutions, and services to clients in the healthcare and life sciences space. We are trusted partners for organizations that seek to achieve high-impact transformations and reach their higher-purpose mission. Chryselys India supports our global clients to achieve high-impact transformations and reach their higher-purpose mission. Please visit https://linkedin.com/company/chryselys/mycompany or https://chryselys.com for more details.
Posted 5 days ago
8.0 - 13.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Sr Data Engineer What you will do Let’s do this. Let’s change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution, collaborate with business partners and other IS service leads to deliver IS capability and roadmap in support of business strategy and goals. Real world data analytics, visualization and advanced technology play a vital role in supporting Amgen’s industry leading innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast paced business needs across geographic regions Identify and resolve complex data-related challenges Adhere to best practices for coding, testing, and designing reusable code/component Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation Collaborate and communicate effectively with product teams What we expect of you We are all different, yet we all use our unique contributions to serve patients. 
Basic Qualifications: Master's degree / Bachelor's degree and 8 to 13 years of experience in Computer Science, IT or related field Must-Have Skills: Hands on experience with bigdata technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on bigdata processing Hands on experience with various Python/R packages for EDA, feature engineering and machine learning model training Proficiency in data analysis tools (eg. SQL) and experience with data visualization tools Excellent problem-solving skills and the ability to work with large, complex datasets Strong understanding of data governance frameworks, tools, and standard processes. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA) Preferred Qualifications: Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing, machine learning model development Strong understanding of data modeling, data warehousing, and data integration concepts Knowledge of Python/R, Databricks, SageMaker, OMOP. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments) Certified Data Scientist (preferred on Databricks or Cloud environments) Machine Learning Certification (preferred on Databricks or Cloud environments) SAFe for Teams certification (preferred) Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 5 days ago
5.0 - 7.0 years
0 Lacs
Hyderābād
On-site
Job Description – Sr. Data Engineer
Roles & Responsibilities:
We are looking for a Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines, and for integrating data from multiple sources or vendors to provide holistic insights from data. You are expected to build and manage data warehouse solutions, design data models, create ETL processes, implement data quality mechanisms, etc. Perform the EDA (exploratory data analysis) required to troubleshoot data-related issues and assist in their resolution. Should have experience in client interaction. Experience in mentoring juniors and providing required guidance.
Required Technical Skills:
Extensive experience in Python, PySpark, and SQL. Strong experience in Data Warehouse, ETL, Data Modelling, building ETL pipelines, and the Snowflake database. Must have strong hands-on experience in Azure and its services. Must be proficient in Databricks, Redshift, ADF, etc. Hands-on experience in cloud services like Azure and AWS (S3, Glue, Lambda, CloudWatch, Athena). Sound knowledge of end-to-end data management, data ops, quality, and data governance. Knowledge of SFDC and Waterfall/Agile methodology. Strong knowledge of the Pharma domain / life sciences commercial data operations.
Qualifications:
Bachelor’s or Master’s in Engineering/MCA or an equivalent degree. 5-7 years of relevant industry experience as a Data Engineer. Experience working on Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, etc. High motivation, good work ethic, maturity, self-organization, and personal initiative. Ability to work collaboratively and provide support to the team. Excellent written and verbal communication skills. Strong analytical and problem-solving skills.
Posted 5 days ago
2.0 - 6.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Associate Data Engineer What you will do Let’s do this. Let’s change the world. In this vital role we seek a skilled Data Engineer to build and optimize our data infrastructure. As a key contributor, you will collaborate closely with cross-functional teams to design and implement robust data pipelines that efficiently extract, transform, and load data into our AWS-based data lake and data warehouse. Your expertise will be instrumental in empowering data-driven decision making through advanced analytics and predictive modeling. Roles & Responsibilities: Building and optimizing data pipelines, data warehouses, and data lakes on the AWS and Databricks platforms. Managing and maintaining the AWS and Databricks environments. Ensuring data integrity, accuracy, and consistency through rigorous quality checks and monitoring. Maintain system uptime and optimal performance Working closely with cross-functional teams to understand business requirements and translate them into technical solutions. Exploring and implementing new tools and technologies to enhance ETL platform performance. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Bachelor’s degree and 2 to 6 years. Functional Skills: Must-Have Skills: Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and columnar data stores. Proven ability to optimize query performance on big data platforms. Proficient in leveraging Python, PySpark, and Airflow to build scalable and efficient data ingestion, transformation, and loading processes. Ability to learn new technologies quickly. Strong problem-solving and analytical skills. Excellent communication and teamwork skills. Good-to-Have Skills: Experienced with SQL/NOSQL database, vector database for large language models Experienced with data modeling and performance tuning for both OLAP and OLTP databases Experienced with Apache Spark, Apache Airflow Experienced with software engineering best-practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven etc.), automated unit testing, and Dev Ops Experienced with AWS, GCP or Azure cloud services What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. 
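To illustrate the Airflow-based orchestration skills this posting mentions, a minimal hedged DAG sketch; the DAG id, schedule, and ingestion function are hypothetical and stand in for the real extract-and-load logic.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_to_data_lake():
    # Placeholder for the real extract-and-load step (e.g., triggering a PySpark job).
    print("ingesting daily batch...")

# Airflow 2.4+ style; older versions use schedule_interval instead of schedule.
with DAG(
    dag_id="daily_ingest_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="ingest_to_data_lake",
        python_callable=ingest_to_data_lake,
    )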
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
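For context on the Python, PySpark, and Airflow skill set listed in this posting, below is a minimal Airflow DAG sketch of a daily extract-then-transform flow; the DAG id, task names, and the work done inside each task are illustrative assumptions, not Amgen's actual pipelines.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull a vendor file into the raw zone of the data lake
    print("extracting raw data")

def transform():
    # Placeholder: run a PySpark/SQL transformation into curated tables
    print("transforming data")

with DAG(
    dag_id="example_daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task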
Posted 5 days ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Data Engineer
What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution, and will collaborate with business partners and other IS service leads to deliver the IS capability and roadmap in support of business strategy and goals. Real-world data analytics, visualization, and advanced technology play a vital role in supporting Amgen’s industry-leading, innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member that assists in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
- Collaborate and communicate effectively with product teams.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years of experience in Computer Science, IT, or a related field.
Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing.
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Strong understanding of data governance frameworks, tools, and best practices.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).
Preferred Qualifications:
Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, SageMaker, OMOP.
Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)
- Certified Data Scientist (preferred on Databricks or cloud environments)
- Machine Learning Certification (preferred on Databricks or cloud environments)
- SAFe for Teams certification (preferred)
Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
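As a hedged illustration of the EDA and data-profiling skills this posting mentions, the following pandas sketch shows a quick profile of a dataset; the file name and the event_date column are assumptions, not part of the role description.

import pandas as pd

df = pd.read_parquet("example_claims.parquet")  # placeholder dataset

# Quick profile: shape, per-column null rates, and summary statistics
print(df.shape)
print(df.isna().mean().sort_values(ascending=False).head(10))
print(df.describe(include="all").T)

# Simple trend check: monthly record counts (assumes an 'event_date' column exists)
monthly = (
    df.assign(month=pd.to_datetime(df["event_date"]).dt.to_period("M"))
      .groupby("month")
      .size()
)
print(monthly.tail(12))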
Posted 5 days ago
1.0 years
4 - 8 Lacs
Bengaluru
On-site
Department: Command Centre
Job posted on: Jul 28, 2025
Employee Type: Permanent
Experience range (Years): 1 year - 2 years
Job Description: Data Analyst
Location: Bengaluru
Job Summary: We are seeking a highly motivated Data Analyst to join our growing team. This role involves analyzing complex datasets, generating actionable insights, and supporting data-driven decision-making across various business functions. You will work closely with cross-functional teams to help optimize performance and improve business outcomes.
Key Responsibilities:
- Work closely with big data engineering teams on data availability and quality.
- Analyze large and complex datasets to identify trends, patterns, and actionable insights.
- Track KPIs and performance metrics to support operational and strategic decision-making.
- Translate business needs into data analysis problems and deliver clear, actionable insights.
- Conduct root cause analysis on business challenges using structured data approaches.
- Communicate data insights effectively through presentations and reports.
- Identify gaps in data and opportunities for process automation.
- Develop and maintain documentation of reports, dashboards, and analytics processes.
Qualifications:
- Bachelor’s degree in Engineering, Statistics, Computer Science, Business, Economics, or a related field.
- 1-2+ years of professional experience in a Data Analyst or Business Analyst role.
- Proficiency in SQL is mandatory.
- Experience with Python (Pandas, NumPy) or R for data analysis is mandatory.
- Strong Excel/Google Sheets skills.
- Experience with data visualization tools (Tableau, Power BI, Looker, or Superset) is an additional plus.
- Basic understanding of statistical methods (descriptive statistics, hypothesis testing).
- Knowledge of PySpark is an additional plus.
Key Skills: Analytical Thinking & Problem-Solving, Communication & Presentation Skills, Data Storytelling, Attention to Detail
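To illustrate the kind of KPI tracking described in this posting, here is a minimal pandas sketch; the CSV file and the order_id, amount, and order_date columns are invented placeholders, not details from the role.

import pandas as pd

orders = pd.read_csv("orders_sample.csv", parse_dates=["order_date"])  # placeholder file

# Weekly KPIs: order volume, revenue, and average order value
weekly = (
    orders.assign(week=orders["order_date"].dt.to_period("W"))
          .groupby("week")
          .agg(order_count=("order_id", "count"), revenue=("amount", "sum"))
)
weekly["avg_order_value"] = weekly["revenue"] / weekly["order_count"]
print(weekly.tail(8))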
Posted 5 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Work mode: Hybrid
Work location: PAN INDIA
Work timing: 2 PM to 11 PM
Primary Skill: Data Engineer
- Experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases is required.
- Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Experience in Redshift is also required, along with other SQL database experience.
- Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture.
- Understanding of building an end-to-end data pipeline.
Secondary Skills:
- Strong understanding of Kinesis, Kafka, and CDK. Experience with Kafka and ECS is also required.
- A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling is required.
- Experience in Node.js and CDK.
Responsibilities:
- Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark.
- Design and implement a customizable data processing framework using Python/PySpark. This framework should be capable of handling diverse scenarios and evolving data processing requirements.
- Implement data pipelines for data ingestion, transformation, and extraction leveraging AWS cloud services.
- Seamlessly integrate a variety of AWS services, including S3, Glue, Kafka, Lambda, SQL, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL), and AWS Crawler, to construct a highly scalable and reliable data ingestion and extraction pipeline.
- Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios.
- Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data.
- Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.
Qualifications
Must have:
- Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases is required.
- Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch.
- Strong working experience in Redshift is required, along with other SQL database experience.
- Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture.
- Complete understanding of building an end-to-end data pipeline.
Nice to have:
- Strong understanding of Kinesis, Kafka, and CDK.
- A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling.
- Experience in Node.js and CDK.
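As a sketch of how one step of a metadata-driven ingestion flow on AWS serverless components might look, here is an illustrative Lambda-style handler using boto3; the bucket layout, config key, and queue URL are assumptions for illustration, not the actual framework described in the posting.

import json

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Placeholder queue URL for the downstream processing job
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-ingest-queue"

def handler(event, context):
    # For each object dropped into the landing bucket, look up its pipeline
    # metadata and enqueue a message for the downstream Glue/PySpark job.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        config_obj = s3.get_object(Bucket=bucket, Key="config/pipeline_metadata.json")
        metadata = json.loads(config_obj["Body"].read())

        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({
                "bucket": bucket,
                "key": key,
                "config": metadata.get(key.split("/")[0], {}),
            }),
        )
    return {"status": "queued"}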
Posted 5 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The below client is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities, and the planet.
Job Title: Software Engineer Senior
Location: Chennai
Work Type: Hybrid
Position Description: As part of the client's DP&E Platform Observability team, you'll help build a top-tier monitoring platform focused on latency, traffic, errors, and saturation. You'll design, develop, and maintain a scalable, reliable platform, improving MTTR/MTTX, creating dashboards, and optimizing costs. Experience with large systems, monitoring tools (Prometheus, Grafana, etc.), and cloud platforms (AWS, Azure, GCP) is ideal. The focus is a centralized observability source for data-driven decisions and faster incident response.
Skills Required: Spring Boot, Angular, Cloud Computing
Skills Preferred: Google Cloud Platform - BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, API
Experience Required:
- 5+ years of overall experience with proficiency in Java, Angular, or any JavaScript technology, with experience in designing and deploying cloud-based data pipelines and microservices using GCP tools like BigQuery, Dataflow, and Dataproc.
- Ability to leverage best-in-class data platform technologies (Apache Beam, Kafka, ...) to deliver platform features, and design and orchestrate platform services to deliver data platform capabilities.
- Service-Oriented Architecture and Microservices: Strong understanding of SOA, microservices, and their application within a cloud data platform context. Develop robust, scalable services using Java Spring Boot, Python, Angular, and GCP technologies.
- Full-Stack Development: Knowledge of front-end and back-end technologies, enabling collaboration on data access and visualization layers (e.g., React, Node.js). Design and develop RESTful APIs for seamless integration across platform services. Implement robust unit and functional tests to maintain high standards of test coverage and quality.
- Database Management: Experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases, as well as columnar databases like BigQuery.
- Data Governance and Security: Understanding of data governance frameworks and implementing RBAC, encryption, and data masking in cloud environments.
- CI/CD and Automation: Familiarity with CI/CD pipelines, Infrastructure as Code (IaC) tools like Terraform, and automation frameworks. Manage code changes with GitHub and troubleshoot and resolve application defects efficiently. Ensure adherence to SDLC best practices, independently managing feature design, coding, testing, and production releases.
- Problem-Solving: Strong analytical skills with the ability to troubleshoot complex data platform and microservices issues.
Experience Preferred: GCP Data Engineer, GCP Professional Cloud
Education Required: Bachelor's Degree
TekWissen® Group is an equal opportunity employer supporting workforce diversity.
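For a feel of the observability-on-GCP work described in this posting, here is a hedged sketch that pulls a p95 latency metric from BigQuery with the google-cloud-bigquery client; the project, dataset, table, and column names are invented, not the client's actual schema.

from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT
      service,
      APPROX_QUANTILES(latency_ms, 100)[OFFSET(95)] AS p95_latency_ms
    FROM `example-project.observability.request_logs`
    WHERE event_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
    GROUP BY service
    ORDER BY p95_latency_ms DESC
"""

# Print per-service p95 latency for a dashboard or alerting check
for row in client.query(query).result():
    print(f"{row.service}: p95 = {row.p95_latency_ms} ms")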
Posted 5 days ago
0 years
6 - 7 Lacs
Noida
On-site
Posted On: 27 Jul 2025
Location: Noida, UP, India
Company: Iris Software
Why Join Us?
Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software.
About Iris Software
At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.
Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version.
Job Description
Key skills needed are: 1. SQL, 2. Python, 3. PySpark, and 4. Machine Learning.
Mandatory Competencies:
- Data Science and Machine Learning - Data Science and Machine Learning - Amazon Machine Learning
- Data Science and Machine Learning - Data Science and Machine Learning - Python
- Beh - Communication
- Big Data - Big Data - PySpark
- Database - Database Programming - SQL
Perks and Benefits for Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
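As a small, self-contained illustration of the Python and machine learning skills this posting lists, here is a scikit-learn sketch on synthetic data; the dataset and model choice are placeholders, not a description of Iris Software's projects.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real project would load features from SQL/PySpark
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))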
Posted 5 days ago