8.0 - 11.0 years
6 - 9 Lacs
Noida
On-site
Snowflake - Senior Technical Lead (Full-time)

Company Description
About Sopra Steria: Sopra Steria, a major tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions that make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.

Job Description
Position: Snowflake - Senior Technical Lead
Experience: 8-11 years
Location: Noida / Bangalore
Education: B.E. / B.Tech. / MCA
Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security
Good-to-have Skills: Snowpark, Data Build Tool (dbt), Finance Domain

• Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing.
• Experience in data warehousing, with at least 2 years focused on Snowflake.
• Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration.
• Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks.
• Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning.
• Familiarity with data security, compliance requirements, and governance best practices.
• Experience in Python, Scala, or Java for Snowpark development is good to have.
• Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM).

Key Responsibilities
• Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost.
• Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe).
• Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion).
• Monitor query performance and resource utilization; tune warehouses, caching, and clustering.
• Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads.
• Define and enforce role-based access control (RBAC), masking policies, and object tagging.
• Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured.
• Establish best practices for dimensional modeling, data vault architecture, and data quality.
• Create and maintain data dictionaries, lineage documentation, and governance standards.
• Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets.
• Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies.
• Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives.

Qualifications
B.Tech / MCA

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
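The masking-policy responsibility above can be illustrated with a small sketch. This is a hypothetical pure-Python analogue of what a role-based masking policy does (Snowflake's own `CREATE MASKING POLICY` is SQL, and the role and column names here are invented for illustration):

```python
# Hypothetical sketch: role-based column masking, analogous in spirit to a
# Snowflake masking policy that branches on CURRENT_ROLE(). The role names
# and the 'email' column are invented for illustration.

def mask_email(value: str, role: str) -> str:
    """Return the clear value for privileged roles, a masked form otherwise."""
    if role in {"SECURITY_ADMIN", "PII_READER"}:
        return value
    local, _, domain = value.partition("@")
    return "*****@" + domain  # keep the domain so aggregations on it still work

def apply_policy(rows, role):
    """Apply the masking function to the 'email' column of each row."""
    return [{**row, "email": mask_email(row["email"], role)} for row in rows]

rows = [{"id": 1, "email": "jane.doe@example.com"}]
print(apply_policy(rows, "ANALYST"))     # masked for a non-privileged role
print(apply_policy(rows, "PII_READER"))  # clear for a privileged role
```

The key design point, mirrored from real masking policies, is that the data is stored once and masking is decided per query based on the caller's role.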
Posted 1 month ago
7.0 years
3 - 9 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
• Work with large, diverse datasets to deliver predictive and prescriptive analytics
• Develop innovative solutions using data modeling, machine learning, and statistical analysis
• Design, build, and evaluate predictive and prescriptive models and algorithms
• Use tools like SQL, Python, R, and Hadoop for data analysis and interpretation
• Solve complex problems using data-driven approaches
• Collaborate with cross-functional teams to align data science solutions with business goals
• Lead AI/ML project execution to deliver measurable business value
• Ensure data governance and maintain reusable platforms and tools
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications (Technical Skills):
• Programming languages: Python, R, SQL
• Machine learning tools: TensorFlow, PyTorch, scikit-learn
• Big data technologies: Hadoop, Spark
• Visualization tools: Tableau, Power BI
• Cloud platforms: AWS, Azure, Google Cloud
• Data engineering: Talend, Databricks, Snowflake, Data Factory
• Statistical software: R, Python libraries
• Version control: Git

Preferred Qualifications:
• Master's or PhD in Data Science, Computer Science, Statistics, or a related field
• Certifications in data science or machine learning
• 7+ years of experience in a senior data science role with enterprise-scale impact
• Experience managing AI/ML projects end-to-end
• Strong communication skills for technical and non-technical audiences
• Demonstrated problem-solving and analytical thinking
• Business acumen to align data science with strategic goals
• Knowledge of data governance and quality standards

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
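The predictive-modeling work described above would in practice use the listed stacks (scikit-learn, TensorFlow, PyTorch); as a minimal self-contained illustration of fitting a predictive model, here is an ordinary-least-squares line fit in pure Python (the data points are invented):

```python
# Minimal sketch of a predictive model: ordinary least squares for y = a*x + b.
# In a real project this would be e.g. scikit-learn's LinearRegression;
# the data points below are invented for illustration.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var          # slope: covariance over variance of x
    b = mean_y - a * mean_x  # intercept passes through the means
    return a, b

def predict(a, b, x):
    return a * x + b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lies exactly on y = 2x + 1
print(round(a, 6), round(b, 6))  # 2.0 1.0
print(predict(a, b, 10))         # 21.0
```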
Posted 1 month ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: ETL Talend Lead
Location: Bangalore, Hyderabad, Chennai, Pune
Work Mode: Hybrid
Job Type: Full-time
Shift Timings: 2:00 - 11:00 PM
Years of Experience: 8 - 15 years

ETL Development Lead:
• Lead and mentor a team of Talend ETL developers, providing technical direction and guidance on ETL/data integration development.
• Design complex data integration solutions using Talend and AWS.
• Collaborate with stakeholders to define project scope, timelines, and deliverables.
• Contribute to project planning, risk assessment, and mitigation strategies.
• Ensure adherence to project timelines and quality standards.
• Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.
• Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components.
• Build and maintain robust, scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files).
• Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality.
• Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes.
• Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions.
• Perform unit testing and participate in system integration testing of ETL processes.
• Monitor and maintain Talend environments, including job scheduling and performance tuning.
• Document technical specifications, data flow diagrams, and ETL processes.
• Stay up to date with the latest Talend features, best practices, and industry trends.
• Participate in code reviews and contribute to the establishment of development standards.
• Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components.
• Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT).
• Strong SQL skills for data querying and manipulation.
• Experience with data profiling, data quality checks, and error handling within ETL processes.
• Familiarity with job scheduling tools and monitoring frameworks.
• Excellent problem-solving, analytical, and communication skills.
• Ability to work independently and collaboratively within a team environment.
• Basic understanding of AWS services, i.e. EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, security groups, Route 53, network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB.
• Understanding of AWS data integration services, i.e. Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions.

Preferred Qualifications:
• Experience leading and mentoring a team of 8+ Talend ETL developers.
• Experience working with US healthcare customers.
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Talend certifications (e.g., Talend Certified Developer), AWS Certified Cloud Practitioner / Data Engineer Associate.
• Experience with AWS data and infrastructure services.
• Basic understanding of Terraform and GitLab is required.
• Experience with scripting languages such as Python or shell scripting.
• Experience with agile development methodologies.
• Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
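Error handling within ETL processes, mentioned above, often comes down to a retry-with-backoff wrapper around a flaky extract or load step. A minimal sketch (the failing step is simulated; in a real Talend or AWS job it would be a database, S3, or API call):

```python
import time

def with_retries(step, attempts=3, base_delay=0.01):
    """Run an ETL step, retrying with exponential backoff on failure."""
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == attempts:
                raise  # retries exhausted: surface the error to the scheduler
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated flaky extract: fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return ["row1", "row2"]

print(with_retries(flaky_extract))  # succeeds on the third attempt
```

Capping attempts and re-raising on the last one matters: the orchestrator (Talend job conductor, Step Functions, etc.) should see a hard failure rather than a silently skipped load.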
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
insightsoftware (ISW) is a growing, dynamic computer software company that helps businesses achieve greater levels of financial intelligence across their organization with our world-class financial reporting solutions. At insightsoftware, you will learn and grow in a fast-paced, supportive environment that will take your career to the next level. The Data Conversion Specialist is a member of the insightsoftware Project Management Office (PMO) who demonstrates teamwork, results orientation, a growth mindset, disciplined execution, and a winning attitude.

Location: Hyderabad (work from office)
Working Hours: 5:00 PM - 2:00 AM IST or 6:00 PM - 3:00 AM IST; should be comfortable working a night shift as required.

Position Summary
The Consultant will integrate and map customer data from client source system(s) to our industry-leading platform. The role will include, but is not limited to:
• Using strong technical data migration, scripting, and organizational skills to ensure client data is converted efficiently and accurately to the insightsoftware (ISW) platform.
• Performing extract, transform, load (ETL) activities to ensure accurate and timely data conversions.
• Providing in-depth research and analysis of complex scenarios to develop innovative solutions that meet customer needs while remaining within project governance.
• Mapping and maintaining business requirements to the solution design using tools such as requirements traceability matrices (RTM).
• Presenting findings, requirements, and problem statements for ratification by stakeholders and working groups.
• Identifying and documenting data gaps to allow change impact and downstream impact analysis to be conducted.

Qualifications
• Experience assessing data and analytic requirements to establish mapping rules from source to target systems to meet business objectives.
• Experience with real-time, batch, and ETL approaches for complex data conversions.
• Working knowledge of extract, transform, load (ETL) methodologies and tools such as Talend, Dell Boomi, etc.
• Ability to use data mapping tools to prepare data for loads based on target system specifications.
• Working experience with various data applications/systems such as Oracle SQL, Excel, .csv files, etc.
• Strong SQL scripting experience.
• Communicate with clients and/or the ISW Project Manager to scope, develop, test, and implement conversions/integrations.
• Effectively communicate with ISW Project Managers and customers to keep projects on target.
• Continually drive improvements in the data migration process.
• Collaborate via phone and email with clients and/or the ISW Project Manager throughout the conversion/integration process.
• Demonstrated collaboration and problem-solving skills.
• Working knowledge of software development lifecycle (SDLC) methodologies including, but not limited to, Agile and Waterfall.
• Clear understanding of cloud and application integrations.
• Ability to work independently, prioritize tasks, and manage multiple tasks simultaneously.
• Ensure clients' data is converted/integrated accurately and within deadlines established by the ISW Project Manager.
• Experience in customer SIT, UAT, migration, and go-live support.

Additional Information
All your information will be kept confidential according to EEO guidelines. At this time insightsoftware is not able to offer sponsorship to candidates who are not eligible to work in the country where the position is located. insightsoftware About Us: Hear From Our Team - InsightSoftware (wistia.com). Background checks are required for employment with insightsoftware, where permitted by country or state/province.
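The source-to-target mapping work described above can be sketched as a declarative set of mapping rules applied per record, which is also how it would appear in an RTM. The field names and transforms here are invented for illustration:

```python
# Sketch of source-to-target mapping rules for a data conversion.
# Each target field maps to (source field, transform). All names are
# invented for illustration; a real mapping comes from the RTM.

MAPPING = {
    "customer_id":   ("CUST_NO", int),
    "customer_name": ("CUST_NAME", str.strip),
    "balance":       ("BAL_AMT", float),
}

def convert(record: dict) -> dict:
    """Apply the mapping rules to one source record."""
    return {target: transform(record[source])
            for target, (source, transform) in MAPPING.items()}

src = {"CUST_NO": "42", "CUST_NAME": "  Acme Ltd ", "BAL_AMT": "10.50"}
print(convert(src))
```

Keeping the rules in a single table-like structure (rather than scattered code) makes the mapping reviewable by stakeholders and easy to diff when requirements change.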
At insightsoftware, we are committed to equal employment opportunity regardless of race, color, ethnicity, ancestry, religion, national origin, gender, sex, gender identity or expression, sexual orientation, age, citizenship, marital or parental status, disability, veteran status, or other class protected by applicable law. We are proud to be an equal opportunity workplace.
Posted 1 month ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
• Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
• Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation.
• Contribute to reusable components/assets/accelerators to support capability development.
• Participate in customer presentations as a platform architect / subject matter expert on big data, Azure cloud, and related technologies.
• Participate in customer PoCs to deliver the outcomes.
• Participate in delivery reviews/product reviews and quality assurance, and act as a design authority.

Preferred Education
Non-Degree Program

Required Technical and Professional Expertise
• Experience in designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems.
• Experience in data engineering and architecting data platforms; experience architecting and implementing data platforms on the Azure cloud platform.
• Azure cloud experience is mandatory: ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake, Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow.
• Experience in the big data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred Technical and Professional Experience
• Experience architecting complex data platforms on the Azure cloud platform and on-prem.
• Experience and exposure to implementing Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
• Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
Posted 1 month ago
10.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Experience: 10 - 15 years

Job Description
• Lead execution of the assigned projects and be responsible for end-to-end execution.
• Lead, guide and support the design and implementation of targeted strategies, including identification of change impacts to people, process, policy, and structure; stakeholder identification and alignment; appropriate communication and feedback loops; success measures; training; organizational readiness; and long-term sustainability.
• Manage day-to-day activities, including scope, financials (e.g. business case, budget), resourcing (e.g. full-time employees, roles and responsibilities, utilization), timelines, toll gates, and risks.
• Implement project review and quality assurance to ensure successful execution of goals and stakeholder satisfaction.
• Consistently report and review progress with the Program Lead, steering group, and relevant stakeholders.
• Be involved in more than one project or work across a portfolio of projects.
• Identify improvement and efficiency opportunities across the projects.
• Analyze data, evaluate results, and develop recommendations and road maps across multiple workstreams.
• Build and maintain effective partnerships with key cross-functional leaders and project team members across functions such as Finance and Technology.

Experience
• Experience working as a Project Manager / Scrum Master as a service provider (not on internal projects).
• Knowledge of functional supply chain and planning processes, including ERP/MRP, capacity planning, and managing planning activities with contract manufacturers (good to have).
• Experience implementing ERP systems such as SAP and Oracle (good to have; not mandatory).
• Experience with systems integration and ETL tools such as Informatica and Talend is a plus.
• Experience with data mapping and systems integration is a plus.
• Functional knowledge of supply chain or after-sales service operations is a plus.
• Outstanding drive, excellent interpersonal skills, and the ability to communicate effectively, both verbally and in writing, and to contribute immediately in a team environment.
• Ability to prioritize and perform well in a fast-paced environment while maintaining a high level of client focus.
• Demonstrable track record of delivery and impact in managing/delivering transformation, with a minimum of 6-9 years' experience in project management and business transformation.
• Experience managing technology projects (data analysis, visualization, app development, etc.) along with at least one function such as procurement, process improvement, continuous improvement, change management, or operating model design.
• Has performed the role of a scrum master or managed a project with scrum teams.
• Has managed projects with stakeholders across multiple locations.
• Past experience managing analytics projects is a huge plus.

Education
• Understanding and application of Agile and waterfall methodologies.
• Exposure to tools and applications such as Microsoft Project, Jira, Confluence, Power BI, Alteryx.
• Understanding of Lean Six Sigma.
• Preferably a postgraduate/MBA, though not mandatory.

Expectations
• Excellent interpersonal (communication and presentation) and organizational skills.
• Problem-solving abilities and a can-do attitude.
• Confident, proactive self-starter, comfortable managing and engaging others.
• Effective in engaging, partnering with, and influencing stakeholders across the matrix up to VP level.
• Ability to move fluidly between big picture and detail, always keeping the end goal in mind.
• Inclination toward collaborative partnership, able to help establish and be part of high-performing teams for impact.
• Highly diligent with a close eye for detail; delivers quality outputs.
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 5+ years

Role Overview:
Responsible for designing, building, and maintaining scalable data pipelines and architectures. This role requires expertise in SQL, ETL frameworks, big data technologies, cloud services, and programming languages to ensure efficient data processing, storage, and integration across systems.

Requirements:
• Minimum 5+ years of experience as a Data Engineer or in a similar data-related role.
• Strong proficiency in SQL for querying databases and performing data transformations.
• Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, or custom-built solutions).
• Proficiency in at least one programming language such as Python, Java, or Scala for data processing tasks.
• Experience with cloud-based data services and data lakes (e.g., Snowflake, Databricks, AWS S3, GCP BigQuery, or Azure Data Lake).
• Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
• Experience with ETL tools (e.g., Talend, Apache NiFi, SSIS) and data integration techniques.
• Knowledge of data warehousing concepts and database design principles.
• Good understanding of NoSQL and big data technologies such as MongoDB, Cassandra, Spark, Hadoop, and Hive.
• Experience with data modeling and schema design for OLAP and OLTP systems.
• Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

Educational Qualification: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
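Pipeline frameworks like the Airflow and Luigi tools named above execute tasks in dependency order. A minimal sketch of that scheduling core (Kahn's topological-sort algorithm over an invented extract/transform/load task graph):

```python
from collections import deque

def topological_order(deps):
    """deps maps task -> set of upstream tasks; returns a valid run order."""
    remaining = {task: set(ups) for task, ups in deps.items()}
    ready = deque(sorted(t for t, ups in remaining.items() if not ups))
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        # Completing a task unblocks any downstream task that was waiting on it.
        for t, ups in remaining.items():
            if task in ups:
                ups.remove(task)
                if not ups:
                    ready.append(t)
    if len(order) != len(deps):
        raise ValueError("cycle detected in task graph")
    return order

# Invented pipeline: extract feeds a transform and a validate step in parallel;
# load waits for both.
deps = {"extract": set(), "transform": {"extract"},
        "validate": {"extract"}, "load": {"transform", "validate"}}
print(topological_order(deps))
```

This is the same invariant Airflow enforces: a task runs only after every upstream task has succeeded, and a cyclic graph is rejected outright.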
Posted 1 month ago
4.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Power BI and AAS Expert (Strong SC or Specialist Senior)
• Should have hands-on experience of data modelling in Azure SQL Data Warehouse and Azure Analysis Services.
• Should be able to write and test DAX queries.
• Should be able to generate paginated reports in Power BI.
• Should have a minimum of 3 years' working experience delivering projects in Power BI.

Must Have:
• 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation.
• Optimize and tune Databricks jobs for performance and scalability.
• Experience with Scala and/or Python programming languages.
• Proficiency in SQL for querying and managing data.
• Expertise in ETL (Extract, Transform, Load) processes.
• Knowledge of data modeling and data warehousing concepts.
• Implement best practices for data pipelines, including monitoring, logging, and error handling.
• Excellent problem-solving skills and attention to detail.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving abilities.
• Experience with version control systems (e.g., Git) to manage and track changes to the codebase.
• Document technical designs, processes, and procedures related to Databricks development.
• Stay current with Databricks platform updates and recommend improvements to existing processes.

Good to Have:
• Agile delivery experience.
• Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP).
• Knowledge of Agile and Scrum software development methodologies.
• Understanding of data lake architectures.
• Familiarity with tools like Apache NiFi, Talend, or Informatica.
• Skills in designing and implementing data models.
Posted 1 month ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
This role is for one of Weekday's clients.
Min Experience: 4 years
Location: Ahmedabad
Job Type: Full-time

We are seeking a highly skilled Senior Database Administrator with 5-8 years of experience in data engineering and database management. The ideal candidate will have a strong foundation in data architecture, modeling, and pipeline orchestration. Hands-on experience with modern database technologies and exposure to generative AI tools in production environments will be a significant advantage. This role involves leading efforts to streamline data workflows, improve automation, and deliver high-impact insights across the organization.

Requirements
Key Responsibilities:
• Design, develop, and manage scalable and efficient data pipelines (ETL/ELT) across multiple database systems.
• Architect and maintain high-availability, secure, and scalable data storage solutions.
• Utilize generative AI tools to automate data workflows and enhance system capabilities.
• Collaborate with engineering, analytics, and data science teams to fulfill data requirements and optimize data delivery.
• Implement and monitor data quality standards, governance practices, and compliance protocols.
• Document data architectures, systems, and processes for transparency and maintainability.
• Apply data modeling best practices to support optimal storage and querying performance.
• Continuously research and integrate emerging technologies to advance the data infrastructure.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• 5-8 years of experience in database administration and data engineering for large-scale systems.
• Proven experience designing and managing relational and non-relational databases.

Mandatory Skills:
• SQL: proficient in advanced queries, performance tuning, and database management.
• NoSQL: experience with at least one NoSQL database such as MongoDB, Cassandra, or CosmosDB.
• Hands-on experience with at least one of the following cloud data warehouses: Snowflake, Redshift, BigQuery, or Microsoft Fabric.
• Cloud expertise: strong experience with Azure and its data services.
• Working knowledge of Python for scripting and data processing (e.g., Pandas, PySpark).
• Experience with ETL tools such as Apache Airflow, Microsoft Fabric, Informatica, or Talend.
• Familiarity with generative AI tools and their integration into data pipelines.

Preferred Skills & Competencies:
• Deep understanding of database performance, tuning, backup, recovery, and security.
• Strong knowledge of data governance, data quality management, and metadata handling.
• Experience with Git or other version control systems.
• Familiarity with AI/ML-driven data solutions is a plus.
• Excellent problem-solving skills and the ability to resolve complex database issues.
• Strong communication skills to collaborate with cross-functional teams and stakeholders.
• Demonstrated ability to manage projects and mentor junior team members.
• Passion for staying updated with the latest trends and best practices in database and data engineering technologies.
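The ETL/ELT pipeline work described above is usually incremental: each run picks up only rows modified since the last successful load, tracked by a watermark. A minimal sketch with invented rows and ISO-8601 timestamps (which compare correctly as strings):

```python
# Sketch of a watermark-driven incremental load: only rows newer than the
# last successful load are picked up. Rows and timestamps are invented.

def incremental_batch(rows, watermark):
    """Return rows newer than the watermark, plus the advanced watermark."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    # If nothing is fresh, the watermark stays put rather than regressing.
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updated_at": "2024-01-01T00:00:00"},
    {"id": 2, "updated_at": "2024-01-03T12:00:00"},
    {"id": 3, "updated_at": "2024-01-05T08:30:00"},
]
batch, wm = incremental_batch(rows, "2024-01-02T00:00:00")
print([r["id"] for r in batch], wm)  # ids 2 and 3; watermark advances
```

In a real pipeline the watermark would be persisted (in a metadata table or the orchestrator's state) between runs so that a restart resumes from the last committed point.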
Posted 1 month ago
0 years
0 Lacs
India
On-site
Company Description
ThreatXIntel is a startup cybersecurity company dedicated to providing customized, affordable solutions to protect businesses and organizations from cyber threats. Our services include cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We take a proactive approach to security, continuously monitoring and testing our clients' digital environments to identify vulnerabilities before they can be exploited.

Role Description
We are looking for a freelance Data Engineer with strong experience in PySpark and AWS data services, particularly S3 and Redshift. The ideal candidate will also have some familiarity with integrating or handling data from Salesforce. This role focuses on building scalable data pipelines, transforming large datasets, and enabling efficient data analytics and reporting.

Key Responsibilities:
• Develop and optimize ETL/ELT data pipelines using PySpark for large-scale data processing.
• Manage data ingestion, storage, and transformation across AWS S3 and Redshift.
• Design data flows and schemas to support reporting, analytics, and business intelligence needs.
• Perform incremental loads, partitioning, and performance tuning in distributed environments.
• Extract and integrate relevant datasets from Salesforce for downstream processing.
• Ensure data quality, consistency, and availability for analytics teams.
• Collaborate with data analysts, platform engineers, and business stakeholders.

Required Skills:
• Strong hands-on experience with PySpark for large-scale distributed data processing.
• Proven track record working with AWS S3 (data lake) and Amazon Redshift (data warehouse).
• Ability to write complex SQL queries for transformation and reporting.
• Basic understanding of or experience integrating data from Salesforce (APIs or exports).
• Experience with performance optimization, partitioning strategies, and efficient schema design.
• Knowledge of version control and collaborative development tools (e.g., Git).
Nice to Have:
• Experience with AWS Glue or Lambda for orchestration.
• Familiarity with Salesforce objects, SOQL, or ETL tools like Talend, Informatica, or Airflow.
• Understanding of data governance and security best practices in cloud environments.
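The partitioning strategies mentioned above (for S3 key layouts or Spark shuffles) come down to deterministically assigning each record a partition from its key. A pure-Python sketch using a stable hash (key names are invented; in PySpark this is what `repartition` on a column does under the hood, conceptually):

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically assign a key to a partition via a stable hash.
    md5 is used for stability across runs (Python's built-in hash() of a
    string is salted per process)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

def partition_records(records, key_field, num_partitions):
    """Group records into partitions by the hash of their key field."""
    parts = {i: [] for i in range(num_partitions)}
    for rec in records:
        parts[partition_for(rec[key_field], num_partitions)].append(rec)
    return parts

# Invented records keyed by order id.
records = [{"order_id": f"ord-{i}"} for i in range(6)]
parts = partition_records(records, "order_id", 4)
print({p: len(rs) for p, rs in parts.items()})  # rough spread across 4 partitions
```

The property that matters for incremental loads and joins is that the same key always lands in the same partition, so co-partitioned datasets can be joined without a full shuffle.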
Posted 1 month ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Summary
Pfizer's purpose is to deliver breakthroughs that change patients' lives. Research and Development is at the heart of fulfilling Pfizer's purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy, or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world. Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our data analytics and supply chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.

Role Responsibilities
• Lead data modeling and engineering efforts within advanced data platform teams to achieve digital outcomes; provide guidance and may lead/co-lead moderately complex projects.
• Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes.
• Lead the architecture, design, and implementation of cloud data lakes, data warehouses, data marts, and data APIs.
• Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise.
• Collaborate effectively with contractors to deliver technical enhancements.
• Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment.
• Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency.
• Conduct root cause analysis and address production data issues.
• Lead the design, development, and implementation of AI models and algorithms to support sophisticated data analytics and supply chain initiatives.
• Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects.
• Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives.
• Document and present findings, methodologies, and project outcomes to various stakeholders.
• Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery.
• Work with large and complex datasets, including data cleaning, preprocessing, and feature selection.

Basic Qualifications
• A bachelor's or master's degree in computer science, artificial intelligence, machine learning, or a related discipline.
• Over 4 years of experience as a Data Engineer, Data Architect, or in data warehousing, data modeling, and data transformations.
• Over 2 years of experience in AI, machine learning, and large language model (LLM) development and deployment.
• Proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred.
• Strong understanding of data structures, algorithms, and software design principles.
• Programming languages: proficiency in Python and SQL, and familiarity with Java or Scala.
• AI and automation: knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect; ability to use GenAI or agents to augment data engineering practices.

Preferred Qualifications
• Data warehousing: experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
• ETL tools: knowledge of ETL tools like Apache NiFi, Talend, or Informatica.
• Big data technologies: familiarity with Hadoop, Spark, and Kafka for big data processing.
• Cloud platforms: hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
Containerization: Understanding of Docker and Kubernetes for containerization and orchestration.
Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files.
Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune.
Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets.
Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch.
Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.
Non-standard Work Schedule, Travel or Environment Requirements
Occasional travel required.
Work Location Assignment: Hybrid
The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits | (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.
Sunshine Act
Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.
EEO & Employment Eligibility
Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.
Information & Business Tech
Posted 1 month ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary:
We are looking for an experienced and motivated Senior/Lead Talend Developer to join our data engineering team. The ideal candidate will possess deep technical expertise in Talend ETL, SQL, and data integration concepts. This role requires a balanced combination of hands-on development and team leadership, making it ideal for someone who can lead by example while contributing as an individual contributor.
Key Responsibilities:
Design, develop, and deploy ETL workflows using Talend to extract, transform, and load data from various sources.
Write optimized SQL queries for data analysis, transformation, and validation.
Act as a technical lead, guiding and mentoring a team of developers while managing project deliverables.
Perform code reviews, provide best-practice recommendations, and ensure adherence to data standards and governance policies.
Collaborate with business analysts, data architects, and stakeholders to understand data requirements and translate them into scalable solutions.
Troubleshoot and resolve technical issues in ETL processes and data pipelines.
Ensure high availability and performance of data processes in production.
Maintain comprehensive documentation of data flows, processes, and architecture.
Required Skills & Qualifications:
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
8+ years of experience in ETL development, with at least 6 years hands-on in Talend (Talend Open Studio, Talend Data Integration, or Talend Cloud).
Strong proficiency in SQL with the ability to handle large volumes of data across relational databases.
Proven experience working as a team lead or senior developer, with leadership over junior developers.
Ability to manage multiple tasks, prioritize deliverables, and work effectively in a fast-paced environment.
Solid understanding of data warehousing, data integration patterns, and performance optimization.
Strong communication skills – both written and verbal.
Posted 1 month ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Testing Engineer
Exp: 8+ years
Location: Hyderabad and Gurgaon (Hybrid)
Notice Period: Immediate to 15 days
Job Description:
Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse.
Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules.
Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues.
Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts.
Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability.
Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt).
Experience with scripting languages to automate data testing.
Familiarity with data visualization tools like Tableau, Power BI, or Looker.
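The source-to-target validation described above can be sketched as a small script. This is a minimal, illustrative example only, using an in-memory SQLite database as a stand-in for the warehouse; the table and column names (`src`, `tgt`, `id`, `amount`) are hypothetical, not part of the role's actual stack.

```python
import sqlite3

def validate_source_to_target(conn, source_table, target_table, key_col, value_col):
    """Compare row counts and per-key values between a source and target table."""
    cur = conn.cursor()
    counts = {}
    for table in (source_table, target_table):
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        counts[table] = cur.fetchone()[0]
    issues = []
    if counts[source_table] != counts[target_table]:
        issues.append(f"row count mismatch: {counts}")
    # Keys present in source but missing from, or altered in, the target
    cur.execute(
        f"SELECT s.{key_col} FROM {source_table} s "
        f"LEFT JOIN {target_table} t ON s.{key_col} = t.{key_col} "
        f"WHERE t.{key_col} IS NULL OR s.{value_col} != t.{value_col}"
    )
    for (key,) in cur.fetchall():
        issues.append(f"mismatch for key {key}")
    return issues

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER, amount REAL);
    CREATE TABLE tgt (id INTEGER, amount REAL);
    INSERT INTO src VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO tgt VALUES (1, 10.0), (2, 99.0);  -- row 3 dropped, row 2 corrupted
""")
issues = validate_source_to_target(conn, "src", "tgt", "id", "amount")
print(issues)
```

In practice the same checks would run against the actual source and target connections, and the assertions would be wired into a test framework or scheduled job.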
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa), India
LIXIL INDIA PVT LTD
Employee Assignment, fully remote possible, Full Time, 1 May 2025
Title: Senior Data Engineer
Job Description
A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making.
Key Responsibilities
Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository.
Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions.
Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency.
Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects.
Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability.
Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements are met.
Experience & Skills
Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.).
Strong programming skills in SQL, Java, and Python.
Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS, or API-based extraction.
Expertise in real-time data processing frameworks.
Strong understanding of Git and CI/CD for automated deployment and version control.
Experience with Infrastructure-as-Code tools like Terraform for cloud resource management. Good stakeholder management skills to collaborate effectively across teams. Solid understanding of SAP ERP data and processes to integrate enterprise data sources. Exposure to data visualization and front-end tools (Tableau, Looker, etc.). Strong command of English with excellent communication skills.
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa), India
LIXIL INDIA PVT LTD
Employee Assignment, fully remote possible, Full Time, 1 May 2025
Title: Data Engineer
Job Description
A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making.
Key Responsibilities
Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository.
Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions.
Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency.
Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects.
Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability.
Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements are met.
Experience & Skills
Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.).
Strong programming skills in SQL, Java, and Python.
Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS, or API-based extraction.
Expertise in real-time data processing frameworks.
Strong understanding of Git and CI/CD for automated deployment and version control.
Experience with Infrastructure-as-Code tools like Terraform for cloud resource management. Good stakeholder management skills to collaborate effectively across teams. Solid understanding of SAP ERP data and processes to integrate enterprise data sources. Exposure to data visualization and front-end tools (Tableau, Looker, etc.). Strong command of English with excellent communication skills.
Posted 1 month ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
Work with large, diverse datasets to deliver predictive and prescriptive analytics
Develop innovative solutions using data modeling, machine learning, and statistical analysis
Design, build, and evaluate predictive and prescriptive models and algorithms
Use tools like SQL, Python, R, and Hadoop for data analysis and interpretation
Solve complex problems using data-driven approaches
Collaborate with cross-functional teams to align data science solutions with business goals
Lead AI/ML project execution to deliver measurable business value
Ensure data governance and maintain reusable platforms and tools
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications
Technical Skills
Programming Languages: Python, R, SQL
Machine Learning Tools: TensorFlow, PyTorch, scikit-learn
Big Data Technologies: Hadoop, Spark
Visualization Tools: Tableau, Power BI
Cloud Platforms: AWS, Azure, Google Cloud
Data Engineering: Talend, Databricks, Snowflake, Data Factory
Statistical Software: R, Python libraries
Version Control: Git
Preferred Qualifications
Master’s or PhD in Data Science, Computer Science, Statistics, or related field
Certifications in data science or machine learning
7+ years of experience in a senior data science role with enterprise-scale impact
Experience managing AI/ML projects end-to-end
Solid communication skills for technical and non-technical audiences
Demonstrated problem-solving and analytical thinking
Business acumen to align data science with strategic goals
Knowledge of data governance and quality standards
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description
At GlobalLogic, we are passionate about encouraging a culture of ground-breaking work and excellence. As an Automation Tester, you will be part of an exceptionally hard-working team that is dedicated to delivering world-class solutions. This is your chance to work on brand new projects in an environment that values creativity and your contributions.
Requirements
Mandatory skills – Automation testing with Java Selenium, Manual/Functional testing, API Testing, Rest Assured, BDD Framework, Cucumber, Java core concepts, service bus automation
Optional – Selenium with Java, Azure, Cosmos DB
Participate in business requirement/elaboration meetings; define epics, features, capabilities, and user stories with the business team and product owners; add them to the backlog; and define the acceptance criteria for the stories.
Estimate the scope and size of the testing effort for each user story. This estimated effort becomes part of the overall estimation for each sprint. Re-plan the upcoming sprints' effort estimation based on previous sprints.
Create the test plan, test strategy, and test scripts in JIRA based on the requirements gathered from business teams and reviewed with product owners and development teams.
Job responsibilities
Perform functional, automation, and product integration testing for applications developed in AngularJS, NodeJS, ReactJS, Microsoft Azure Microservices, Talend, MS SQL Server, and Cosmos databases. Testing is conducted with the automation tools Selenium, Cucumber, and Android Studio; device testing is done manually.
Execute the tests for every cycle, via a scheduled automated method or manually, based on test environment availability.
Perform Service Oriented Architecture testing at the early stage of development using tools like SoapUI and Postman.
Manage defects and follow up with the build and business partners until each is fixed and closed based on the business requirements and expectations of the business team.
Review and validate test results and defect reports based on the outstanding defects as per the defect SLA.
What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company.
Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 month ago
10.0 years
0 Lacs
India
Remote
Role: Senior Azure / Data Engineer (ETL/data warehouse background)
Location: Remote, India
Duration: Long-term contract
Need 10+ years of experience
Must-have skills:
Min 5 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
10+ years of proven experience with SQL, schema design, and dimensional data modeling.
Solid knowledge of data warehouse best practices, development standards, and methodologies.
Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment.
Excellent communication and teamwork abilities.
Nice-to-have skills:
Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge.
SAP ECC/S/4 and HANA knowledge.
Intermediate knowledge of Power BI.
Azure DevOps and CI/CD deployments, cloud migration methodologies and processes.
Posted 1 month ago
8.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description
About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.
Job Description
The world is how we shape it.
Position: Snowflake - Senior Technical Lead
Experience: 8-11 years
Location: Noida/Bangalore
Education: B.E./B.Tech./MCA
Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security
Good to have Skills: Snowpark, Data Build Tool, Finance Domain
Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing.
Experience in data warehousing, with at least 2 years focused on Snowflake.
Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration.
Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks.
Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning.
Familiarity with data security, compliance requirements, and governance best practices.
Experience in Python, Scala, or Java for Snowpark development is good to have.
Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM).
Key Responsibilities
Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost.
Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe).
Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion).
Monitor query performance and resource utilization; tune warehouses, caching, and clustering.
Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads.
Define and enforce role-based access control (RBAC), masking policies, and object tagging.
Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured.
Establish best practices for dimensional modeling, data vault architecture, and data quality.
Create and maintain data dictionaries, lineage documentation, and governance standards.
Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets.
Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies.
Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives.
Qualifications
BTech/MCA
Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
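As a rough illustration of the Streams-and-Tasks pattern referenced above, the helper below renders the Snowflake DDL for a stream on a staging table and a scheduled task that applies captured changes. All object names, the warehouse, and the schedule are hypothetical, and the merge logic is simplified to a plain insert; this is a sketch of the pattern, not a production pipeline.

```python
def change_capture_ddl(src_table, target_table, stream_name, task_name,
                       warehouse, schedule="5 MINUTE"):
    """Render Snowflake DDL for a stream + task pair implementing simple CDC."""
    # A stream records inserts/updates/deletes on the source table.
    stream_sql = f"CREATE OR REPLACE STREAM {stream_name} ON TABLE {src_table};"
    # A task polls the stream on a schedule and only runs when it has data.
    task_sql = (
        f"CREATE OR REPLACE TASK {task_name}\n"
        f"  WAREHOUSE = {warehouse}\n"
        f"  SCHEDULE = '{schedule}'\n"
        f"  WHEN SYSTEM$STREAM_HAS_DATA('{stream_name}')\n"
        f"AS\n"
        f"  INSERT INTO {target_table} SELECT * FROM {stream_name};"
    )
    return stream_sql, task_sql

stream_sql, task_sql = change_capture_ddl(
    "raw.orders", "analytics.orders", "raw.orders_stream",
    "analytics.load_orders_task", "etl_wh")
print(stream_sql)
print(task_sql)
```

In a real deployment these statements would typically be managed by a migration tool (dbt, schemachange, or similar) rather than generated ad hoc, and the `AS` body would usually be a `MERGE` keyed on the stream's metadata columns.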
Posted 1 month ago
4.0 - 9.0 years
0 - 1 Lacs
Bengaluru
Hybrid
Design and implement highly scalable ELK (Elasticsearch, Logstash, and Kibana) stack and ElastiCache solutions.
Grafana: create different visualizations and dashboards according to client needs.
Experience with scripting languages like JavaScript, Python, PowerShell, etc.
Should be able to work with APIs, shards, etc. in Elasticsearch.
Architecting data structures using Elasticsearch and ElastiCache.
Query languages and writing complex queries with joins that deal with large amounts of data.
End-to-end low-level design, development, administration, and delivery of ELK-based reporting solutions.
Strong exposure to writing Talend queries; Elastic queries for data analysis.
Creating Elasticsearch index templates.
Index lifecycle management.
Managing and monitoring Elasticsearch clusters.
Experience with analyzers and shards.
Experience in solving performance issues on large sets of data indexes.
Strong expertise in Python scripting.
Strong experience in installing and configuring ELK on bare metal and clouds (GCP, AWS, and Azure).
Strong experience in using Elasticsearch indices, Elasticsearch APIs, Kibana dashboards, Logstash, and Beats.
Good experience in using or creating plugins for ELK, such as authentication and authorization plugins.
Good experience in enhancing open-source ELK for custom capabilities.
Experience in provisioning automation frameworks such as Kubernetes or Docker.
Experience working with JSON.
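The index-template work listed above can be sketched as a plain request body. The function below composes a composable index template in the shape Elasticsearch expects for `PUT _index_template/<name>`; the pattern, shard count, field mappings, and ILM policy name are all illustrative assumptions, not values from this role.

```python
import json

def build_index_template(pattern, shards=1, replicas=1, ilm_policy=None):
    """Compose an Elasticsearch composable index template body for log data."""
    settings = {"number_of_shards": shards, "number_of_replicas": replicas}
    if ilm_policy:
        # Attach an index lifecycle management policy by name (assumed to exist).
        settings["index.lifecycle.name"] = ilm_policy
    return {
        "index_patterns": [pattern],
        "template": {
            "settings": settings,
            "mappings": {
                "properties": {
                    "@timestamp": {"type": "date"},
                    "message": {"type": "text"},
                    "level": {"type": "keyword"},
                }
            },
        },
    }

template = build_index_template("app-logs-*", shards=3, ilm_policy="logs-30d")
print(json.dumps(template, indent=2))
```

The resulting dict would be sent as the JSON body of a `PUT _index_template/app-logs` request, so that every index matching `app-logs-*` picks up the shard settings, mappings, and lifecycle policy automatically.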
Posted 1 month ago
5.0 - 10.0 years
13 - 18 Lacs
Gurugram
Work from Office
Position Summary
To be a technology expert architecting solutions and mentoring people in BI/reporting processes, with prior expertise in the Pharma domain.
Job Responsibilities
Technology Leadership – Lead and guide the team, independently or with little support, to design, implement, and deliver complex reporting and BI project assignments.
Technical Portfolio – Expertise in a range of BI and hosting technologies like the AWS stack (Redshift, EC2), QlikView, Qlik Sense, Tableau, MicroStrategy, Spotfire.
Project Management – Get accurate briefs from the client and translate them into tasks for team members with priorities and timeline plans. Must maintain high standards of quality and thoroughness, and should be able to monitor the accuracy and quality of others' work. Ability to think in advance about potential risks and mitigation plans.
Logical Thinking – Able to think analytically and use a systematic and logical approach to analyze data, problems, and situations. Must be able to guide team members in analysis.
Handle Client Relationship – Manage the client relationship and client expectations independently. Should be able to deliver results back to the client independently. Should have excellent communication skills.
Education
BE/B.Tech, Master of Computer Application
Work Experience
Minimum of 5 years of relevant experience in the Pharma domain.
Technical: Should have 10+ years of hands-on experience in at least 2 of the following: QlikView, Qlik Sense, Tableau, MicroStrategy, Spotfire / (Informatica, SSIS, Talend & Matillion) / Big Data technologies - Hadoop ecosystem.
Aware of techniques such as UI design, report modeling, performance tuning, and regression testing.
Basic expertise with MS Excel; advanced expertise with SQL.
Functional: Should have experience with the following concepts and technologies. Specifics: Pharma data sources like IMS, Veeva, Symphony, Cegedim, etc.
Business processes like alignment, market definition, segmentation, sales crediting, and activity metrics calculation.
Calculation of all sales, activity, and managed care KPIs.
Behavioural Competencies
Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management
Technical Competencies
Problem Solving, Life Science Knowledge, Communication, Project Management, Attention to P&L Impact, Capability Building / Thought Leadership, Scale of revenues managed / delivered
Posted 1 month ago
5.0 - 10.0 years
11 - 15 Lacs
Gurugram
Work from Office
Position Summary
This is the requisition for the Employee Referrals Campaign, and the JD is generic. We are looking for Associates with 5+ years of experience in delivering solutions around Data Engineering, big data analytics and data lakes, MDM, BI, and data visualization. Experienced in integrating and standardizing structured and unstructured data to enable faster insights using cloud technology, enabling data-driven insights across the enterprise.
Job Responsibilities
He/she should be able to design, implement, and deliver complex Data Warehousing/Data Lake, Cloud Data Management, and Data Integration project assignments.
Technical Design and Development – Expertise in any of the following skills: any ETL tool (Informatica, Talend, Matillion, DataStage) and hosting technologies like the AWS stack (Redshift, EC2) is mandatory; any BI tool among Tableau, Qlik, Power BI, and MSTR; Informatica MDM, Customer Data Management. Expert knowledge of SQL, with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must. Experience across Python, PySpark, and Unix/Linux shell scripting.
Project Management is a must-have. Should be able to create simple to complex project plans in Microsoft Project and think in advance about potential risks and mitigation plans per the project plan.
Task Management – Should be able to onboard the team onto the project plan and delegate tasks to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model.
Handle Client Relationship – Manage client communication and client expectations independently or with the support of the reporting manager. Should be able to deliver results back to the client as per plan. Should have excellent communication skills.
Education
Bachelor of Technology, Master's Equivalent - Engineering
Work Experience
Overall, 5-7 years of relevant experience in Data Warehousing and Data Management projects, with some experience in the Pharma domain. We are hiring for the following roles across Data Management tech stacks:
ETL tools among Informatica, IICS/Snowflake, Python & Matillion, and other cloud ETL.
BI tools among Power BI and Tableau.
MDM - Informatica/Reltio, Customer Data Management.
Azure cloud developer using Data Factory and Databricks.
Data Modeler - modelling of data: understanding source data, creating data models for landing and integration.
Python/PySpark - Spark/PySpark design, development, and deployment.
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.
HiLabs Team
Multidisciplinary industry leaders, healthcare domain experts, AI/ML and data science experts, and professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.
Job Title: Data Engineer I/II
Job Location: Pune, Maharashtra, India
Job Summary:
We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.
Responsibilities
Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.
Desired Profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.
HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs!
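The responsibilities above include implementing data validation processes. As a minimal, hedged sketch (the field names and completeness rule are assumptions for illustration, not HiLabs requirements), a basic completeness check might look like:

```python
# Minimal data-validation sketch; field names and the completeness
# rule are illustrative assumptions, not part of the job description.

def validate(records, required_fields):
    """Return a report of rows that fail basic completeness checks."""
    failures = []
    for i, rec in enumerate(records):
        # A field fails if it is absent or holds an empty/falsy value.
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            failures.append({"row": i, "missing": missing})
    return failures

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},          # fails: empty email
    {"email": "c@example.com"},      # fails: missing id
]
print(validate(rows, ["id", "email"]))
```

In production such checks are usually scheduled inside an orchestrator (e.g., an Airflow task) so failures can trigger alerts rather than silently passing bad rows downstream.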
If this position appears to be a good fit for your skillset, we welcome your application.
HiLabs Total Rewards: Competitive salary, accelerated incentive policies, H1B sponsorship, a comprehensive benefits package that includes ESOPs, financial contribution toward your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, a collaborative working environment, smart mentorship, and highly qualified, incredibly talented multidisciplinary professionals from highly renowned and accredited medical schools, business schools, and engineering institutes.
CCPA disclosure notice - https://www.hilabs.com/privacy
Posted 1 month ago
3.0 - 8.0 years
11 - 21 Lacs
Pune
Work from Office
Hiring for a Denodo Admin with 3+ years of experience and the below skills:
Must Have:
- Denodo administration: logical data models, views & caching
- ETL pipelines (Informatica/Talend) for EDW/data lakes; diagnosing performance issues
- SQL, Informatica, Talend, Big Data, Hive
Required Candidate Profile:
- Design, develop & maintain ETL pipelines using Informatica PowerCenter or Talend to extract, transform, and load data into Hive
- Optimize & troubleshoot complex SQL queries
- Immediate joiner is a plus
- Work from office is a must
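The profile above calls for optimizing and troubleshooting complex SQL queries. As an illustrative sketch only (it uses SQLite as a stand-in; the posting's actual stack is Denodo/Informatica/Hive, and the table and index names here are invented), the usual first step is to read the query plan and check whether a predicate is served by an index or by a full table scan:

```python
# Illustrative SQL-tuning sketch using the stdlib sqlite3 module as a
# stand-in engine; table and index names are assumptions for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, f"cust{i % 100}", i * 1.5) for i in range(1000)],
)

query = "SELECT * FROM orders WHERE customer = 'cust7'"

# Without an index, the planner must scan every row of the table.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# An index on the filtered column lets the planner seek directly
# to the matching rows instead of scanning.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)
```

The same habit transfers to Hive (`EXPLAIN`) or Denodo execution traces: confirm where the scan happens before rewriting the query.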
Posted 1 month ago
4.0 - 9.0 years
9 - 18 Lacs
Pune, Gurugram
Work from Office
We are hiring for two Data Engineer profiles: the first specializes in traditional ETL with SAS DI and Big Data (Hadoop, Hive); the second is more versatile, skilled in modern data engineering with Python, MongoDB, and real-time processing.
Posted 1 month ago