5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 5+ Years Role Overview: Responsible for designing, building, and maintaining scalable data pipelines and architectures. This role requires expertise in SQL, ETL frameworks, big data technologies, cloud services, and programming languages to ensure efficient data processing, storage, and integration across systems. Requirements: • 5+ years of experience as a Data Engineer or in a similar data-related role. • Strong proficiency in SQL for querying databases and performing data transformations. • Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, or custom-built solutions). • Proficiency in at least one programming language such as Python, Java, or Scala for data processing tasks. • Experience with cloud-based data services and data lakes (e.g., Snowflake, Databricks, AWS S3, GCP BigQuery, or Azure Data Lake). • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). • Experience with ETL tools (e.g., Talend, Apache NiFi, SSIS) and data integration techniques. • Knowledge of data warehousing concepts and database design principles. • Good understanding of NoSQL and big data technologies like MongoDB, Cassandra, Spark, Hadoop, and Hive. • Experience with data modeling and schema design for OLAP and OLTP systems. • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). Educational Qualification: Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
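For illustration, a minimal sketch of the kind of daily extract-transform-load job such pipeline frameworks orchestrate, written with Apache Airflow's TaskFlow API (assuming Airflow 2.x; the DAG name, task bodies, and sample records are hypothetical):

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_pipeline():
    @task
    def extract() -> list[dict]:
        # In practice this would query a source database or API.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": -5.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Standardize and filter records before loading.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # In practice this would write to a warehouse table via a hook.
        print(f"Loading {len(rows)} rows")

    load(transform(extract()))


orders_pipeline()
```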
Posted 4 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
This role is for one of Weekday's clients Min Experience: 4 years Location: Ahmedabad Job Type: Full-time We are seeking a highly skilled Senior Database Administrator with 5-8 years of experience in data engineering and database management. The ideal candidate will have a strong foundation in data architecture, modeling, and pipeline orchestration. Hands-on experience with modern database technologies and exposure to generative AI tools in production environments will be a significant advantage. This role involves leading efforts to streamline data workflows, improve automation, and deliver high-impact insights across the organization. Requirements Key Responsibilities: Design, develop, and manage scalable and efficient data pipelines (ETL/ELT) across multiple database systems. Architect and maintain high-availability, secure, and scalable data storage solutions. Utilize generative AI tools to automate data workflows and enhance system capabilities. Collaborate with engineering, analytics, and data science teams to fulfill data requirements and optimize data delivery. Implement and monitor data quality standards, governance practices, and compliance protocols. Document data architectures, systems, and processes for transparency and maintainability. Apply data modeling best practices to support optimal storage and querying performance. Continuously research and integrate emerging technologies to advance the data infrastructure. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 5-8 years of experience in database administration and data engineering for large-scale systems. Proven experience in designing and managing relational and non-relational databases. Mandatory Skills: SQL - Proficient in advanced queries, performance tuning, and database management. NoSQL - Experience with at least one NoSQL database such as MongoDB, Cassandra, or CosmosDB. Hands-on experience with at least one of the following cloud data warehouses: Snowflake, Redshift, BigQuery, or Microsoft Fabric. Cloud expertise - Strong experience with Azure and its data services. Working knowledge of Python for scripting and data processing (e.g., Pandas, PySpark). Experience with ETL tools such as Apache Airflow, Microsoft Fabric, Informatica, or Talend. Familiarity with generative AI tools and their integration into data pipelines. Preferred Skills & Competencies: Deep understanding of database performance, tuning, backup, recovery, and security. Strong knowledge of data governance, data quality management, and metadata handling. Experience with Git or other version control systems. Familiarity with AI/ML-driven data solutions is a plus. Excellent problem-solving skills and the ability to resolve complex database issues. Strong communication skills to collaborate with cross-functional teams and stakeholders. Demonstrated ability to manage projects and mentor junior team members. Passion for staying updated with the latest trends and best practices in database and data engineering technologies.
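As a hedged illustration of the Python/Pandas data-processing skill named above, a minimal cleaning script (file and column names are hypothetical):

```python
import pandas as pd

# Load a raw extract, standardize types, and deduplicate on the business key.
df = pd.read_csv("customer_extract.csv")
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df["email"] = df["email"].str.strip().str.lower()
df = df.drop_duplicates(subset="customer_id", keep="last")

# Parquet output (requires pyarrow) feeds the downstream load step.
df.to_parquet("customer_clean.parquet", index=False)
```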
Posted 4 days ago
0 years
0 Lacs
India
On-site
Company Description ThreatXIntel is a startup cyber security company dedicated to providing customized, affordable solutions to protect businesses and organizations from cyber threats. Our services include cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We take a proactive approach to security, continuously monitoring and testing our clients' digital environments to identify vulnerabilities before they can be exploited. Role Description We are looking for a freelance Data Engineer with strong experience in PySpark and AWS data services, particularly S3 and Redshift. The ideal candidate will also have some familiarity with integrating or handling data from Salesforce. This role focuses on building scalable data pipelines, transforming large datasets, and enabling efficient data analytics and reporting. Key Responsibilities: Develop and optimize ETL/ELT data pipelines using PySpark for large-scale data processing. Manage data ingestion, storage, and transformation across AWS S3 and Redshift. Design data flows and schemas to support reporting, analytics, and business intelligence needs. Perform incremental loads, partitioning, and performance tuning in distributed environments. Extract and integrate relevant datasets from Salesforce for downstream processing. Ensure data quality, consistency, and availability for analytics teams. Collaborate with data analysts, platform engineers, and business stakeholders. Required Skills: Strong hands-on experience with PySpark for large-scale distributed data processing. Proven track record working with AWS S3 (data lake) and Amazon Redshift (data warehouse). Ability to write complex SQL queries for transformation and reporting. Basic understanding or experience integrating data from Salesforce (APIs or exports). Experience with performance optimization, partitioning strategies, and efficient schema design. Knowledge of version control and collaborative development tools (e.g., Git). Nice to Have: Experience with AWS Glue or Lambda for orchestration. Familiarity with Salesforce objects, SOQL, or ETL tools like Talend, Informatica, or Airflow. Understanding of data governance and security best practices in cloud environments.
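A minimal PySpark sketch of the S3-based incremental-load pattern this posting describes (bucket names and the watermark column are hypothetical; an s3a-capable Spark build with the hadoop-aws package is assumed):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-incremental-load").getOrCreate()

# Read only records newer than the last processed watermark.
last_watermark = "2024-01-01"
events = (
    spark.read.parquet("s3a://example-raw-bucket/events/")
    .filter(F.col("updated_at") > F.lit(last_watermark))
)

# Partition output by date so downstream queries (e.g. Redshift Spectrum or a
# COPY from S3) can prune partitions efficiently.
(
    events.withColumn("event_date", F.to_date("updated_at"))
    .write.mode("append")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/events/")
)
```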
Posted 4 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Summary Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy, or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world. Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes. Role Responsibilities Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes. Provide guidance and lead or co-lead moderately complex projects. Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes. Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs. Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise. Collaborate effectively with contractors to deliver technical enhancements. Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment. Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency. Conduct root cause analysis and address production data issues. Lead the design, development, and implementation of AI models and algorithms to support sophisticated data analytics and supply chain initiatives. Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects. Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives. Document and present findings, methodologies, and project outcomes to various stakeholders. Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery. Ability to work with large and complex datasets, including data cleaning, preprocessing, and feature selection. Basic Qualifications A bachelor’s or master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline. Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations. Over 2 years of experience in AI, machine learning, and large language model (LLM) development and deployment. A proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred. Strong understanding of data structures, algorithms, and software design principles. Programming Languages: Proficiency in Python and SQL, and familiarity with Java or Scala. AI and Automation: Knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect.
Ability to use GenAI or Agents to augment data engineering practices Preferred Qualifications Data Warehousing: Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake. ETL Tools: Knowledge of ETL tools like Apache NiFi, Talend, or Informatica. Big Data Technologies: Familiarity with Hadoop, Spark, and Kafka for big data processing. Cloud Platforms: Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP). Containerization: Understanding of Docker and Kubernetes for containerization and orchestration. Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files. Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune. Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets. Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch. Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship. Non-standard Work Schedule, Travel Or Environment Requirements Occasional travel required Work Location Assignment: Hybrid The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits | (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility. Sunshine Act Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.
EEO & Employment Eligibility Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States. Information & Business Tech
Posted 4 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary: We are looking for an experienced and motivated Senior/Lead Talend Developer to join our data engineering team. The ideal candidate will possess deep technical expertise in Talend ETL, SQL, and data integration concepts. This role requires a balanced combination of hands-on development and team leadership, making it ideal for someone who can lead by example while contributing as an individual contributor. Key Responsibilities: Design, develop, and deploy ETL workflows using Talend to extract, transform, and load data from various sources. Write optimized SQL queries for data analysis, transformation, and validation. Act as a technical lead, guiding and mentoring a team of developers while managing project deliverables. Perform code reviews, provide best practice recommendations, and ensure adherence to data standards and governance policies. Collaborate with business analysts, data architects, and stakeholders to understand data requirements and translate them into scalable solutions. Troubleshoot and resolve technical issues in ETL processes and data pipelines. Ensure high availability and performance of data processes in production. Maintain comprehensive documentation of data flows, processes, and architecture. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. 8+ years of experience in ETL development with at least 6 years hands-on in Talend (Talend Open Studio, Talend Data Integration, or Talend Cloud). Strong proficiency in SQL with the ability to handle large volumes of data across relational databases. Proven experience working as a team lead or senior developer, with leadership over junior developers. Ability to manage multiple tasks, prioritize deliverables, and work effectively in a fast-paced environment. Solid understanding of data warehousing, data integration patterns, and performance optimization. Strong communication skills – both written and verbal.
Posted 4 days ago
5.0 years
0 Lacs
India
Remote
Job Title: Data Engineer Location: Remote Experience: 5+ Years Job Summary: We are seeking a highly skilled Data Engineer with strong experience in ETL development, data replication, and cloud data integration to join our remote team. The ideal candidate will be proficient in Talend, have hands-on experience with IBM Data Replicator and Qlik Replicate, and demonstrate deep knowledge of Snowflake architecture, CDC processes, and data transformation scripting. Key Responsibilities: Design, develop, and maintain robust ETL pipelines using Talend integrated with Snowflake. Implement and manage real-time data replication solutions using IBM Data Replicator and Qlik Replicate. Work with complex data source systems including DB2 (containerized and traditional), Oracle, and Hadoop. Model and manage slowly changing dimensions (Type 2 SCD) in Snowflake. Optimize data pipelines for scalability, reliability, and performance. Design and implement Change Data Capture (CDC) strategies to support real-time and incremental data flows. Write efficient and maintainable code in SQL, Python, or Shell to support data transformations and automation. Collaborate with data architects, analysts, and other engineers to support data-driven initiatives. Required Skills & Qualifications: Strong proficiency in Talend ETL development and integration with Snowflake. Practical experience with IBM Data Replicator and Qlik Replicate. In-depth understanding of Snowflake architecture and Type 2 SCD data modeling. Familiarity with containerized environments and various data sources such as DB2, Oracle, and Hadoop. Experience implementing CDC and real-time data replication patterns. Proficiency in SQL, Python, and Shell scripting. Excellent problem-solving and communication skills. Self-motivated and comfortable working independently in a fully remote environment. Preferred Qualifications: Snowflake certification or Talend certification. Experience working in an Agile or DevOps environment. Familiarity with data governance and data quality best practices.
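For illustration, a hedged sketch of a Type 2 SCD load in Snowflake of the kind this posting names, expressed as a MERGE run through the Snowflake Python connector (all object names are hypothetical; a second statement, not shown, would insert the new version of each changed row):

```python
import snowflake.connector

# Close out the current dimension row when a tracked attribute changes, and
# insert brand-new customers. Changed rows keep history via valid_from/valid_to.
SCD2_MERGE = """
MERGE INTO dim_customer AS tgt
USING stg_customer AS src
  ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
WHEN MATCHED AND tgt.address <> src.address THEN UPDATE SET
  is_current = FALSE,
  valid_to   = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN INSERT
  (customer_id, address, valid_from, valid_to, is_current)
  VALUES (src.customer_id, src.address, CURRENT_TIMESTAMP(), NULL, TRUE)
"""

conn = snowflake.connector.connect(
    account="example_account", user="etl_user", password="...",
    warehouse="LOAD_WH", database="ANALYTICS", schema="DIM",
)
conn.cursor().execute(SCD2_MERGE)
```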
Posted 4 days ago
3.0 years
0 Lacs
India
Remote
Title: Azure Data Engineer Location: Remote Employment type: Full Time with BayOne We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics. What You'll Do Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable Work on modern data lakehouse architectures and contribute to data governance and quality frameworks Tech Stack Azure | Databricks | PySpark | SQL What We’re Looking For 3+ years of experience in data engineering or analytics engineering Hands-on with cloud data platforms and large-scale data processing Strong problem-solving mindset and a passion for clean, efficient data design Job Description: Min 3 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design, and dimensional data modelling Solid knowledge of data warehouse best practices, development standards, and methodologies Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. Be an independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced and dynamic environment. Excellent communication and teamwork abilities. Nice-to-Have Skills: Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Service, Cosmos DB knowledge. SAP ECC/S/4 and HANA knowledge. Intermediate knowledge of Power BI. Azure DevOps and CI/CD deployments, cloud migration methodologies and processes. BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
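A minimal sketch of the lakehouse ingestion step described above, assuming a Databricks runtime with Delta Lake available (storage paths and table names are hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Land raw JSON from the lake into a bronze Delta table; silver/gold layers
# would build on it downstream in a typical medallion architecture.
raw = spark.read.json("abfss://landing@exampleaccount.dfs.core.windows.net/orders/")
raw.write.format("delta").mode("append").saveAsTable("bronze.orders")
```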
Posted 5 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Testing Engineer Exp: 8+ years Location: Hyderabad and Gurgaon (Hybrid) Notice Period: Immediate to 15 days Job Description: Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse. ● Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules. ● Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues. ● Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts. ● Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability. ● Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt). ● Experience with scripting languages to automate data testing. ● Familiarity with data visualization tools like Tableau, Power BI, or Looker.
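As a hedged illustration of the automated source-to-target validation described above, a minimal Python script comparing row counts between a source database and its warehouse copy (connection URLs and table names are hypothetical; the Snowflake URL assumes the snowflake-sqlalchemy dialect is installed):

```python
import sqlalchemy as sa

# The same parameterized count runs against both engines.
COUNT_SQL = sa.text("SELECT COUNT(*) FROM orders WHERE load_date = :d")

source = sa.create_engine("postgresql://etl:***@source-host/appdb")
target = sa.create_engine("snowflake://etl:***@example_account/analytics/public")

with source.connect() as s, target.connect() as t:
    src_count = s.execute(COUNT_SQL, {"d": "2024-01-01"}).scalar_one()
    tgt_count = t.execute(COUNT_SQL, {"d": "2024-01-01"}).scalar_one()

assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"
```

A production harness would extend this with column-level checksums and null-rate checks rather than counts alone.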
Posted 5 days ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Fully remote possible Full Time 1 May 2025 Title Senior Data Engineer Job Description A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making. Key Responsibilities Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository. Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions. Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency. Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects. Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability. Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements. Experience & Skills Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.). Strong programming skills in SQL, Java, and Python. Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS or API-based extraction. Expertise in real-time data processing frameworks. Strong understanding of Git and CI/CD for automated deployment and version control. Experience with Infrastructure-as-Code tools like Terraform for cloud resource management. Good stakeholder management skills to collaborate effectively across teams. Solid understanding of SAP ERP data and processes to integrate enterprise data sources. Exposure to data visualization and front-end tools (Tableau, Looker, etc.). Strong command of English with excellent communication skills.
Posted 5 days ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
IMEA (India, Middle East, Africa) India LIXIL INDIA PVT LTD Employee Assignment Fully remote possible Full Time 1 May 2025 Title Data Engineer Job Description A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making. Key Responsibilities Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository. Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions. Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency. Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects. Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability. Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and regulatory requirements. Experience & Skills Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.). Strong programming skills in SQL, Java, and Python. Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS or API-based extraction. Expertise in real-time data processing frameworks. Strong understanding of Git and CI/CD for automated deployment and version control. Experience with Infrastructure-as-Code tools like Terraform for cloud resource management. Good stakeholder management skills to collaborate effectively across teams. Solid understanding of SAP ERP data and processes to integrate enterprise data sources. Exposure to data visualization and front-end tools (Tableau, Looker, etc.). Strong command of English with excellent communication skills.
Posted 5 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Work with large, diverse datasets to deliver predictive and prescriptive analytics Develop innovative solutions using data modeling, machine learning, and statistical analysis Design, build, and evaluate predictive and prescriptive models and algorithms Use tools like SQL, Python, R, and Hadoop for data analysis and interpretation Solve complex problems using data-driven approaches Collaborate with cross-functional teams to align data science solutions with business goals Lead AI/ML project execution to deliver measurable business value Ensure data governance and maintain reusable platforms and tools Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Technical Skills Programming Languages: Python, R, SQL Machine Learning Tools: TensorFlow, PyTorch, scikit-learn Big Data Technologies: Hadoop, Spark Visualization Tools: Tableau, Power BI Cloud Platforms: AWS, Azure, Google Cloud Data Engineering: Talend, Databricks, Snowflake, Data Factory Statistical Software: R, Python libraries Version Control: Git Preferred Qualifications Master’s or PhD in Data Science, Computer Science, Statistics, or related field Certifications in data science or machine learning 7+ years of experience in a senior data science role with enterprise-scale impact Experience managing AI/ML projects end-to-end Solid communication skills for technical and non-technical audiences Demonstrated problem-solving and analytical thinking Business acumen to align data science with strategic goals Knowledge of data governance and quality standards At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
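For illustration only, a minimal scikit-learn sketch of the predictive-modeling workflow this role describes, trained on synthetic data (the model choice is illustrative, not a stated method of the employer):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real modeling dataset.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a gradient-boosted classifier and evaluate with AUC on held-out data.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```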
Posted 5 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description At GlobalLogic, we are passionate about encouraging a culture of ground-breaking work and excellence. As an Automation Tester, you will be part of an exceptionally hard-working team that is dedicated to delivering world-class solutions. This is your chance to work on brand new projects in an environment that values creativity and your contributions. Requirements Mandatory skills – Automation testing with Java Selenium, Manual/Functional testing, API Testing, Rest Assured, BDD Framework, Cucumber, Java core concepts, service bus automation. Optional – Selenium with Java, Azure, Cosmos DB. Participate in business requirement/elaboration meetings, defining epics, features, capabilities, and user stories with the business team and product owners, adding them to the backlog, and defining the acceptance criteria for the stories. Estimate the scope and size of the testing effort for each user story; this estimated effort forms part of the overall estimation for each sprint. Also, re-plan the upcoming sprints' effort estimation based on previous sprints. Create the test plan, test strategy, and test scripts in the JIRA tool based on requirements gathered from business teams and reviewed with product owners and development teams. Job responsibilities Perform functional, automation, and product integration testing for applications developed in AngularJS, NodeJS, ReactJS, Microsoft Azure Microservices, Talend, MS SQL Server, and Cosmos databases. Testing will be conducted with the automation tools Selenium, Cucumber, and Android Studio, with device testing done manually. Execute the tests for every cycle, via a scheduled automated method or manually based on test environment availability. Perform Service-Oriented Architecture testing at the early stage of development using tools like SoapUI and Postman. Manage defects and follow up with the build and business partners till they are fixed and closed based on the business requirements and expectations from the business team. Review and validate test results and defect reports based on the outstanding defects as per the defect SLA. What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility.
With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 5 days ago
10.0 years
0 Lacs
India
Remote
Role: Senior Azure / Data Engineer (ETL/Data Warehouse background) Location: Remote, India Duration: Long-Term Contract Requires 10+ years of experience Must-have Skills: • Min 5 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. • 10+ years of proven experience with SQL, schema design, and dimensional data modeling • Solid knowledge of data warehouse best practices, development standards, and methodologies • Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. • Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. • Be an independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced and dynamic environment. • Excellent communication and teamwork abilities. Nice-to-Have Skills: • Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Service, Cosmos DB knowledge. • SAP ECC/S/4 and HANA knowledge. • Intermediate knowledge of Power BI • Azure DevOps and CI/CD deployments, cloud migration methodologies and processes
Posted 5 days ago
8.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description About Sopra Steria Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. Job Description The world is how we shape it. Position: Snowflake - Senior Technical Lead Experience: 8-11 years Location: Noida/ Bangalore Education: B.E./ B.Tech./ MCA Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security Good to have Skills: Snowpark, Data Build Tool, Finance Domain Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing. Experience in data warehousing, with at least 2 years focused on Snowflake. Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration. Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks. Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning. Familiarity with data security, compliance requirements, and governance best practices. Experience in Python, Scala, or Java for Snowpark development is good to have. Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM) Key Responsibilities Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost. Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe). Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion). Monitor query performance and resource utilization; tune warehouses, caching, and clustering. Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads. Define and enforce role-based access control (RBAC), masking policies, and object tagging. Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured. Establish best practices for dimensional modeling, data vault architecture, and data quality. Create and maintain data dictionaries, lineage documentation, and governance standards. Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets. Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies. Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives. Qualifications BTech/MCA Additional Information At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
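A hedged sketch of the Snowflake Streams & Tasks pattern named in this posting, run through the Python connector (all object names, columns, and the schedule are hypothetical):

```python
import snowflake.connector

STATEMENTS = [
    # A stream tracks inserts/updates/deletes on the staging table.
    "CREATE OR REPLACE STREAM stg_orders_stream ON TABLE stg_orders",
    # A scheduled task consumes the stream only when it has data.
    """
    CREATE OR REPLACE TASK merge_orders_task
      WAREHOUSE = LOAD_WH
      SCHEDULE  = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('STG_ORDERS_STREAM')
    AS
      INSERT INTO fact_orders (order_id, amount)
      SELECT order_id, amount FROM stg_orders_stream
      WHERE METADATA$ACTION = 'INSERT'
    """,
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK merge_orders_task RESUME",
]

conn = snowflake.connector.connect(account="example", user="etl_user", password="...")
cur = conn.cursor()
for stmt in STATEMENTS:
    cur.execute(stmt)
```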
Posted 5 days ago
5.0 - 10.0 years
13 - 18 Lacs
Gurugram
Work from Office
Position Summary To be a technology expert architecting solutions and mentoring people in BI/Reporting processes, with prior expertise in the Pharma domain. Job Responsibilities o Technology Leadership – Lead and guide the team, independently or with little support, to design, implement, and deliver complex reporting and BI project assignments. o Technical portfolio – Expertise in a range of BI and hosting technologies like the AWS stack (Redshift, EC2), QlikView, QlikSense, Tableau, MicroStrategy, and Spotfire. o Project Management – Get accurate briefs from the Client and translate them into tasks for team members with priorities and timeline plans. Must maintain high standards of quality and thoroughness. Should be able to monitor the accuracy and quality of others' work. Ability to think in advance about potential risks and mitigation plans. o Logical Thinking – Able to think analytically, using a systematic and logical approach to analyze data, problems, and situations. Must be able to guide team members in analysis. o Handle Client Relationship – Manage client relationships and expectations independently. Should be able to deliver results back to the Client independently. Should have excellent communication skills. Education BE/B.Tech Master of Computer Application Work Experience - Minimum of 5 years of relevant experience in the Pharma domain. - Technical: Should have 10+ years of hands-on experience with the following tools: Must have working knowledge of at least 2 of the following – QlikView, QlikSense, Tableau, MicroStrategy, Spotfire / (Informatica, SSIS, Talend & Matillion) / Big Data technologies - Hadoop ecosystem. Aware of techniques such as UI design, report modeling, performance tuning, and regression testing. Basic expertise with MS Excel. Advanced expertise with SQL. - Functional: Should have experience in the following concepts and technologies: Specifics: Pharma data sources like IMS, Veeva, Symphony, Cegedim, etc. Business processes like alignment, market definition, segmentation, sales crediting, and activity metrics calculation. Calculation of all sales, activity, and managed care KPIs. Behavioural Competencies Teamwork & Leadership Motivation to Learn and Grow Ownership Cultural Fit Talent Management Technical Competencies Problem Solving Lifescience Knowledge Communication Project Management Attention to P&L Impact Capability Building / Thought Leadership Scale of revenues managed / delivered
Posted 5 days ago
5.0 - 10.0 years
11 - 15 Lacs
Gurugram
Work from Office
Position Summary This is the requisition for the Employee Referrals Campaign, and the JD is generic. We are looking for Associates with 5+ years of experience in delivering solutions around Data Engineering, big data analytics and data lakes, MDM, BI, and data visualization. Experienced in integrating and standardizing structured and unstructured data to enable faster insights using cloud technology, enabling data-driven insights across the enterprise. Job Responsibilities He/she should be able to design, implement, and deliver complex Data Warehousing/Data Lake, Cloud Data Management, and Data Integration project assignments. Technical Design and Development – Expertise in any of the following skills. Any ETL tools (Informatica, Talend, Matillion, DataStage), and hosting technologies like the AWS stack (Redshift, EC2) is mandatory. Any BI tools among Tableau, Qlik & Power BI, and MicroStrategy. Informatica MDM, Customer Data Management. Expert knowledge of SQL with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems is a must. Experience across Python, PySpark, and Unix/Linux Shell Scripting. Project Management is a must-have. Should be able to create simple to complex project plans in Microsoft Project Plan and think in advance about potential risks and mitigation plans as per the project plan. Task Management – Should be able to onboard the team on the project plan and delegate tasks to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model. Handle Client Relationship – Manage client communication and client expectations independently or with the support of the reporting manager. Should be able to deliver results back to the Client as per plan. Should have excellent communication skills. Education Bachelor of Technology Master's Equivalent - Engineering Work Experience Overall, 5-7 years of relevant experience in Data Warehousing and Data Management projects, with some experience in the Pharma domain. We are hiring for the following roles across Data Management tech stacks - ETL tools among Informatica, IICS/Snowflake, Python & Matillion, and other cloud ETL tools. BI tools among Power BI and Tableau. MDM - Informatica/Reltio, Customer Data Management. Azure cloud Developer using Data Factory and Databricks. Data Modeler - modeling of data: understanding source data, creating data models for landing and integration. Python/PySpark - Spark/PySpark design, development, and deployment.
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You’ll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you’ll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you’ll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Preferred Education Master's Degree Required Technical And Professional Expertise Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements Preferred Technical And Professional Experience Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or Shell scripting for custom transformations and automation tasks
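As a hedged illustration of the Snowflake workload-tuning topics mentioned above (clustering keys and clustering health), a short sketch via the Python connector; the table and column names are hypothetical:

```python
import snowflake.connector

# Connection parameters are placeholders.
conn = snowflake.connector.connect(account="example", user="etl_user", password="...")
cur = conn.cursor()

# Cluster a large fact table on the columns most queries filter by.
cur.execute("ALTER TABLE fact_sales CLUSTER BY (sale_date, region)")

# Inspect how well micro-partitions align with the clustering key.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('fact_sales', '(sale_date, region)')")
print(cur.fetchone()[0])
```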
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes. HiLabs Team Multidisciplinary industry leaders Healthcare domain experts AI/ML and data science experts Professionals hailing from the world’s best universities, business schools, and engineering institutes including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions. Job Title: Data Engineer I/II Job Location: Pune, Maharashtra, India Job summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment. Responsibilities Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources. Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data. Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems. Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability. Automate repetitive data engineering tasks and optimize data workflows for performance and scalability. Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations. Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability Desired Profile Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role. Strong experience with ETL tools like Apache Airflow, Talend, or Informatica. Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development. Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery). Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink. Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift). Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA). Familiarity with version control systems like Git. HiLabs is an equal opportunity employer (EOE).
No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application. HiLabs Total Rewards Competitive Salary, Accelerated Incentive Policies, H1B sponsorship, Comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs & a collaborative working environment, Smart mentorship, and highly qualified multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes. CCPA disclosure notice - https://www.hilabs.com/privacy
Posted 5 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Note: Only candidates with up to 30 days official notice period will be considered. If shortlisted, we will reach out via WhatsApp and email – please respond promptly. Work Type: Full-time | On-site Compensation (Yearly): INR(₹) 1,200,000 to 2,400,000 Working Hours: Standard Business Hours Location: Bengaluru / Gurugram / Nagpur Notice Period: Max 30 days About The Client A technology-driven product engineering company focused on embedded systems, connected devices, and Android platform development. Known for working with top-tier OEMs on innovative, mission-critical projects. About The Role We are hiring a skilled Data Engineer (FME) to develop, automate, and support data transformation pipelines that handle complex spatial and non-spatial datasets. This role requires hands-on expertise in FME workflows, spatial data validation, PostGIS, and Python scripting, with the ability to support dashboards and collaborate across tech and ops teams. Must-Have Qualifications Bachelor’s degree in Engineering (B.E./B.Tech.) 4–8 years of experience in data integration or ETL development Proficient in building FME workflows for data transformation Strong skills in PostgreSQL/PostGIS and spatial data querying Ability to write validation and transformation logic in Python or SQL Experience handling formats like GML, Shapefile, GeoJSON, and GPKG Familiarity with coordinate systems and geometry validation (e.g., EPSG:27700) Working knowledge of cron jobs, logging, and scheduling automation Preferred Tools & Technologies ETL/Integration: FME, Python, Talend (optional) Spatial DB: PostGIS, Oracle Spatial GIS Tools: QGIS, ArcGIS Scripting: Python, SQL Formats: CSV, JSON, GPKG, XML, Shapefiles Workflow Tools: Jira, Git, Confluence Key Responsibilities The role involves designing and automating ETL pipelines using FME, applying custom transformers, and scripting in Python for data validation and transformation. It requires working with spatial data in PostGIS, fixing geometry issues, and ensuring alignment with required coordinate systems. The engineer will also support dashboard integrations by creating SQL views and tracking processing metadata. Additional responsibilities include implementing automation through FME Server, cron jobs, and CI/CD pipelines, as well as collaborating with analysts and operations teams to translate business rules, interpret validation reports, and ensure compliance with LA and HMLR specifications.
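For illustration, a hedged sketch of the PostGIS geometry-validation step described above: repair invalid geometries and reproject to British National Grid (EPSG:27700). Table and column names are hypothetical, and the geometry column is assumed to accept mixed SRIDs:

```python
import psycopg2

# Repair invalid geometries and reproject everything to EPSG:27700.
FIX_SQL = """
UPDATE parcels
SET geom = ST_Transform(ST_MakeValid(geom), 27700)
WHERE NOT ST_IsValid(geom) OR ST_SRID(geom) <> 27700;
"""

with psycopg2.connect("dbname=spatial user=etl") as conn:
    with conn.cursor() as cur:
        cur.execute(FIX_SQL)
        # rowcount reflects the rows touched by the UPDATE.
        print(f"Repaired/reprojected {cur.rowcount} rows")
```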
Posted 5 days ago
4.0 years
0 Lacs
Hyderābād
On-site
Job Summary:
We are looking for an experienced Data Engineer with 4+ years of proven expertise in building scalable data pipelines, integrating complex datasets, and working with cloud-based and big data technologies. The ideal candidate should have hands-on experience with data modeling, ETL processes, and real-time data streaming.

Key Responsibilities:
• Design, develop, and maintain scalable and efficient data pipelines and ETL workflows.
• Work with large datasets from various sources, ensuring data quality and consistency.
• Collaborate with Data Scientists, Analysts, and Software Engineers to support data needs.
• Optimize data systems for performance, scalability, and reliability.
• Implement data governance and security best practices.
• Troubleshoot data issues and identify improvements in data processes.
• Automate data integration and reporting tasks.

Required Skills & Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• 4+ years of experience in data engineering or similar roles.
• Strong programming skills in Python, SQL, and shell scripting.
• Experience with ETL tools (e.g., Apache Airflow, Talend, AWS Glue).
• Proficiency in data modeling, data warehousing, and database design.
• Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services like S3, Redshift, BigQuery, and Snowflake.
• Experience with big data technologies such as Spark, Hadoop, and Kafka.
• Strong understanding of data structures, algorithms, and system design.
• Familiarity with CI/CD tools, version control (Git), and Agile methodologies.

Preferred Skills:
• Experience with real-time data streaming (Kafka, Spark Streaming).
• Knowledge of Docker, Kubernetes, and infrastructure-as-code tools like Terraform.
• Exposure to machine learning pipelines or data science workflows is a plus.

Interested candidates can send their resume.

Job Type: Full-time
Schedule: Day shift
Work Location: In person
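For a sense of what the ETL-workflow responsibilities above involve in practice, here is a minimal sketch of an Apache Airflow (2.4+) DAG. The DAG id, task logic, and conversion rule are placeholder assumptions, not a specification from the employer.

    # Minimal Airflow DAG sketch for a daily extract-transform-load run; all names are placeholders.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(**context):
        # Pull raw records from a source system (placeholder data).
        return [{"id": 1, "amount": 42.0}]

    def transform(**context):
        rows = context["ti"].xcom_pull(task_ids="extract")
        return [{**r, "amount_doubled": r["amount"] * 2} for r in rows]  # illustrative rule

    def load(**context):
        rows = context["ti"].xcom_pull(task_ids="transform")
        print(f"Would load {len(rows)} rows into the warehouse")

    with DAG(
        dag_id="daily_sales_etl",          # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t3 = PythonOperator(task_id="load", python_callable=load)
        t1 >> t2 >> t3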
Posted 5 days ago
0 years
4 - 7 Lacs
Gurgaon
On-site
A Software Engineer is curious and self-driven to build and maintain multi-terabyte operational marketing databases and integrate them with cloud technologies. Our databases typically house millions of individuals and billions of transactions and interact with various web services and cloud-based platforms. Once hired, the qualified candidate will be immersed in the development and maintenance of multiple database solutions to meet global client business objectives.

Job Description:
Key responsibilities:
• 2–4 years of experience, working under close supervision of Tech Leads/Lead Developers.
• Able to understand a detailed design with minimal explanation.
• Individual contributor, able to perform mid- to complex-level tasks with minimal supervision; senior team members will peer-review assigned tasks.
• Build and configure our Marketing Database/Data environment platform by integrating feeds as per the detailed design/transformation logic.
• Good knowledge of Unix scripting and/or Python.
• Strong knowledge of SQL is a must.
• Good understanding of ETL tools (Talend, Informatica, DataStage, Ab Initio, etc.) as well as database skills (Oracle, SQL Server, Teradata, Vertica, Redshift, Snowflake, BigQuery, Azure DW, etc.).
• Fair understanding of relational databases, stored procedures, etc.
• Experience in cloud computing (one or more of AWS, Azure, GCP) will be a plus.
• Requires little supervision and guidance from senior resources.

Location: DGS India - Gurugram - Golf View Corporate Towers
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
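As a hedged illustration of the feed-integration work described above — the file layout, staging table, and connection details below are assumptions, not part of the role description — a basic Python feed load might look like this:

    # Illustrative sketch: stage a delimited marketing feed into a database table.
    # Column names, table name, and DSN are hypothetical.
    import csv
    import psycopg2

    def load_feed(path: str, dsn: str) -> int:
        """Read a CSV feed and bulk-insert its rows into a staging table."""
        with open(path, newline="", encoding="utf-8") as fh:
            rows = [(r["customer_id"], r["txn_date"], r["amount"]) for r in csv.DictReader(fh)]
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO stg_transactions (customer_id, txn_date, amount) VALUES (%s, %s, %s)",
                rows,
            )
        return len(rows)

    if __name__ == "__main__":
        print("Staged rows:", load_feed("daily_feed.csv", "dbname=mktg user=etl"))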
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Company
Gentrack provides leading utilities across the world with innovative cleantech solutions. The global pace of change is accelerating, and utilities need to rebuild for a more sustainable future. Working with some of the world’s biggest energy and water companies, as well as innovative challenger brands, we are helping companies reshape what it means to be a utilities business. We are driven by our passion to create positive impact. That is why utilities rely on us to drive innovation, deliver great customer experiences, and secure profits. Together, we are renewing utilities.

Our Values and Culture
Colleagues at Gentrack are one big team, working together to drive efficiency in two of the planet’s most precious resources, energy and water. We are passionate people who want to drive change through technology and believe in making a difference. Our values drive decisions and how we interact and communicate with customers, partners, shareholders, and each other. Our core values are:
• Respect for the planet
• Respect for our customers
• Respect for each other

Gentrackers are a group of smart thinkers and dedicated doers. We are a diverse team who love our work and the people we work with, and who collaborate and inspire each other to deliver creative solutions that make our customers successful. We are a team that shares knowledge, asks questions, raises the bar, and are expert advisers. At Gentrack we care about doing honest business that is good for not just customers but families, communities, and ultimately the planet. Gentrackers continuously look for a better way and drive quality into everything they do. This is a truly exciting time to join Gentrack, with a clear growth strategy and a world-class leadership team working to fulfil Gentrack’s global aspirations by having the most talented people, an inspiring culture, and a technology-first, people-centric business.

The Opportunity
We are seeking an experienced Data Migration Manager to lead our global data migration practice and drive successful delivery of complex data migrations in our customers’ transformation projects. The Data Migration Manager will be responsible for overseeing the strategic planning, execution, and management of data migration initiatives across our global software implementation projects. This critical role ensures seamless data transition, quality, and integrity for our clients. In line with our value of ‘Respect for the Planet’, we encourage all our people to provide leadership through participating in our sustainability initiatives, including activities run by the regional GSTF, encouraging our people to engage in and drive sustainable behaviours, and supporting organisational change and global sustainability programs.
The Specifics
• Lead and manage a global team of data migration experts, providing strategic direction and professional development
• Develop and maintain comprehensive data migration methodologies and best practices applicable to utility-sector software implementations
• Design and implement robust data migration strategies that address the unique challenges of utility-industry data ecosystems
• Collaborate with solution architects, project managers, and client teams to define detailed data migration requirements and approaches
• Provide guidance and advice across the entire data migration lifecycle, including:
  - Source data assessment and profiling
  - Data cleansing and transformation strategies
  - Migration planning and risk mitigation
  - Execution of migration scripts and processes
  - Validation, reconciliation, and quality assurance of migrated data
• Ensure compliance with data protection regulations and industry-specific standards across different global regions
• Develop and maintain migration toolsets and accelerators to improve efficiency and repeatability of migration processes
• Create comprehensive documentation, migration playbooks, and standard operating procedures
• Conduct regular performance reviews of migration projects and implement continuous improvement initiatives
• Manage and mitigate risks associated with complex data migration projects
• Provide technical leadership and mentorship to the data migration team

What we're looking for (you don’t need to be a guru at all; we look forward to coaching and collaborating with you):
• Proficiency in data migration tools (e.g., Informatica, Talend, Microsoft SSIS)
• Experience with customer information system (CIS) and/or billing system migrations
• Knowledge of data governance frameworks
• Understanding of utility-industry data models and integration challenges
• Familiarity with cloud migration strategies, including Salesforce
• Strategic thinking and innovative problem-solving
• Strong leadership and team management capabilities
• Excellent written and verbal communication skills across technical and non-technical audiences
• Ability to oversee a number of complex, globally dispersed projects
• Cultural sensitivity and adaptability

What we offer in return:
• Personal growth – in leadership, commercial acumen, and technical excellence
• The chance to be part of a global, winning, high-growth organization – with a career path to match
• A vibrant culture full of people passionate about transformation and making a difference – with a one-team, collaborative ethos
• A competitive reward package that truly rewards our top talent
• A chance to make a true impact on society and the planet

Gentrack wants to work with the best people, no matter their background. So, if you are passionate about learning new things and keen to join the mission, you will fit right in.
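To illustrate the validation-and-reconciliation stage listed above — a sketch under assumed table and connection names, not Gentrack's actual tooling — a first-pass row-count reconciliation between source and target databases could look like:

    # Hypothetical reconciliation sketch: compare per-table row counts after a migration.
    import psycopg2

    TABLES = ["customers", "accounts", "meter_readings"]  # illustrative table list

    def reconcile(source_dsn: str, target_dsn: str) -> dict:
        """Return {table: (source_count, target_count)} for a quick parity check."""
        results = {}
        with psycopg2.connect(source_dsn) as src, psycopg2.connect(target_dsn) as tgt:
            with src.cursor() as s, tgt.cursor() as t:
                for table in TABLES:
                    s.execute(f"SELECT count(*) FROM {table}")
                    t.execute(f"SELECT count(*) FROM {table}")
                    results[table] = (s.fetchone()[0], t.fetchone()[0])
        return results

    if __name__ == "__main__":
        for table, (src_n, tgt_n) in reconcile("dbname=legacy", "dbname=new_cis").items():
            status = "OK" if src_n == tgt_n else "MISMATCH"
            print(f"{table}: source={src_n} target={tgt_n} [{status}]")

In practice, count parity is only the first gate; column-level checksums and sampled field comparisons usually follow.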
Posted 5 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position: Database
Location: Noida, India
www.SEW.ai

Who We Are:
SEW, with its innovative and industry-leading cloud platforms, delivers the best Digital Customer Experiences (CX) and Workforce Experiences (WX), powered by AI, ML, and IoT Analytics, to global energy, water, and gas providers. At SEW, the vision is to Engage, Empower, and Educate billions of people to save energy and water. We partner with businesses to deliver platforms that are easy to use, integrate seamlessly, and help build a strong technology foundation that allows them to become future-ready.

Searching for your dream job? We are a true global company that values building meaningful relationships and maintaining a passionate work environment while fostering innovation and creativity. At SEW, we firmly believe that each individual contributes to our success and, in return, we provide opportunities for them to learn new skills and build a rewarding professional career.

A Couple of Pointers:
• We are the fastest-growing company with over 420+ clients and 1550+ employees.
• Our clientele is based in the USA, Europe, Canada, Australia, Asia Pacific, and the Middle East.
• Our platforms engage millions of global users, and we keep adding millions every month.
• We have been awarded 150+ accolades to date. Our clients are continually awarded by industry analysts for implementing our award-winning product.
• We have been featured by Forbes, the Wall Street Journal, and the LA Times for our continuous innovation and excellence in the industry.

Who are we looking for?
An ideal candidate who can demonstrate in-depth knowledge and understanding of RDBMS concepts, with experience writing complex queries and data integration processes in SQL/T-SQL and NoSQL. This individual will help design, develop, and implement new and existing applications.

Roles and Responsibilities:
• Review the existing database design and data management procedures and provide recommendations for improvement.
• Provide subject matter expertise in the design of database schemas and perform data modeling (logical and physical models) for product feature enhancements as well as extending analytical capabilities.
• Develop technical documentation as needed.
• Architect, develop, validate, and communicate Business Intelligence (BI) solutions such as dashboards, reports, KPIs, instrumentation, and alert tools.
• Define data architecture requirements for cross-product integration within and across cloud-based platforms.
• Analyze, architect, develop, validate, and support integrating data into the SEW platform from external data sources: files (XML, CSV, XLS, etc.), APIs (REST, SOAP), and RDBMS (see the sketch below).
• Perform thorough analysis of complex data and recommend actionable strategies.
• Effectively translate data modeling and BI requirements into the design process.
• Big Data platform design, i.e., tool selection, data integration, and data preparation for predictive modeling.

Required Skills:
• Minimum of 4–6 years of experience in data modeling (including conceptual, logical, and physical data models).
• 2–3 years of experience in Extraction, Transformation, and Loading (ETL) work using data migration tools like Talend, Informatica, DataStage, etc.
• 4–6 years of experience as a database developer in Oracle, MS SQL, or another enterprise database, with a focus on building data integration processes.
• Exposure to a NoSQL technology, preferably MongoDB.
• Experience processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica, Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).
• Understanding of data warehousing concepts and decision support systems.
• Ability to handle sensitive and confidential material and adhere to worldwide data security standards.
• Experience writing documentation for design and feature requirements.
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.
• Excellent communication and collaboration skills.
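The REST ingestion mentioned in the responsibilities might, for illustration only, follow a paginated pattern like the sketch below; the endpoint URL, response fields, and pagination scheme are assumptions, not SEW's actual API.

    # Hypothetical sketch: pull records from a paginated REST endpoint for platform ingestion.
    import requests

    def fetch_readings(base_url: str, page_size: int = 500):
        """Yield records page by page until the endpoint returns an empty batch."""
        page = 1
        while True:
            resp = requests.get(
                f"{base_url}/readings",
                params={"page": page, "size": page_size},
                timeout=30,
            )
            resp.raise_for_status()
            batch = resp.json().get("items", [])
            if not batch:
                break
            yield from batch
            page += 1

    if __name__ == "__main__":
        for record in fetch_readings("https://api.example.com/v1"):
            print(record)  # downstream: validate and stage the record for loading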
Posted 5 days ago
8.0 years
0 Lacs
Tamil Nadu, India
On-site
Job Title: Data Engineer

About VXI
VXI Global Solutions is a BPO leader in customer service, customer experience, and digital solutions. Founded in 1998, the company has 40,000 employees in more than 40 locations in North America, Asia, Europe, and the Caribbean. We deliver omnichannel and multilingual support, software development, quality assurance, CX advisory, and automation & process excellence to the world’s most respected brands.

VXI is one of the fastest growing, privately held business services organizations in the United States and the Philippines, and one of the few US-based customer care organizations in China. VXI is also backed by private equity investor Bain Capital. Our initial partnership ran from 2012 to 2016 and was the beginning of prosperous times for the company. During this period, not only did VXI expand our footprint in the US and Philippines, but we also gained ground in the Chinese and Central American markets. We also acquired Symbio, expanding our global technology services offering and enhancing our competitive position. In 2022, Bain Capital re-invested in the organization after completing a buy-out from Carlyle. This is a rare occurrence in the private equity space and shows the level of performance VXI delivers for our clients, employees, and shareholders. With this recent investment, VXI has started on a transformation to radically improve the CX experience through an industry-leading generative AI product portfolio that spans hiring, training, customer contact, and feedback.

Job Description:
We are seeking talented and motivated Data Engineers to join our dynamic team and contribute to our mission of harnessing the power of data to drive growth and success. As a Data Engineer at VXI Global Solutions, you will play a critical role in designing, implementing, and maintaining our data infrastructure to support our customer experience and management initiatives. You will collaborate with cross-functional teams to understand business requirements, architect scalable data solutions, and ensure data quality and integrity. This is an exciting opportunity to work with cutting-edge technologies and shape the future of data-driven decision-making at VXI Global Solutions.

Responsibilities:
• Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and store data from various sources.
• Collaborate with business stakeholders to understand data requirements and translate them into technical solutions.
• Implement data models and schemas to support analytics, reporting, and machine learning initiatives.
• Optimize data processing and storage solutions for performance, scalability, and cost-effectiveness.
• Ensure data quality and integrity by implementing data validation, monitoring, and error handling mechanisms.
• Collaborate with data analysts and data scientists to provide them with clean, reliable, and accessible data for analysis and modeling.
• Stay current with emerging technologies and best practices in data engineering and recommend innovative solutions to enhance our data capabilities.

Requirements:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Proven 8+ years' experience as a data engineer or in a similar role.
• Proficiency in SQL, Python, and/or other programming languages for data processing and manipulation.
• Experience with relational and NoSQL databases (e.g., SQL Server, MySQL, Postgres, Cassandra, DynamoDB, MongoDB, Oracle), data warehousing (e.g., Vertica, Teradata, Oracle Exadata, SAP HANA), and data modeling concepts.
• Strong understanding of distributed computing frameworks (e.g., Apache Spark, Apache Flink, Apache Storm) and cloud-based data platforms (e.g., AWS Redshift, Azure, Google BigQuery, Snowflake).
• Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker, Apache Superset) and data pipeline tools (e.g., Airflow, Kafka, Data Flow, Cloud Data Fusion, Airbyte, Informatica, Talend) is a plus.
• Understanding of data and query optimization, query profiling, and query performance monitoring tools and techniques.
• Solid understanding of ETL/ELT processes, data validation, and data security best practices.
• Experience with version control systems (Git) and CI/CD pipelines.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills to work effectively with cross-functional teams.

Join VXI Global Solutions and be part of a dynamic team dedicated to driving innovation and delivering exceptional customer experiences. Apply now to embark on a rewarding career in data engineering with us!
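As a rough illustration of the Spark-based validation work such a role involves — the input path, schema, and rules below are invented, not VXI's pipeline — a PySpark data-quality pass might look like:

    # Illustrative PySpark sketch: basic data-quality checks on an ingested dataset.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq_checks").getOrCreate()

    # Hypothetical input location and columns.
    df = spark.read.option("header", True).csv("s3://example-bucket/orders/*.csv")

    # Flag rows that fail simple validation rules (null keys, non-positive amounts).
    checked = df.withColumn(
        "is_valid",
        F.col("order_id").isNotNull() & (F.col("amount").cast("double") > 0),
    )

    valid = checked.filter("is_valid")
    rejected = checked.filter(~F.col("is_valid"))

    print("valid:", valid.count(), "rejected:", rejected.count())
    valid.drop("is_valid").write.mode("overwrite").parquet("s3://example-bucket/orders_clean/")

Rejected rows would normally be written to a quarantine location with a reason code rather than discarded.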
Posted 5 days ago