8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Note: Only candidates with up to 30 days official notice period will be considered. If shortlisted, we will reach out via WhatsApp and email – please respond promptly.

Work Type: Full-time | On-site
Compensation (Yearly): INR (₹) 1,200,000 to 2,400,000
Working Hours: Standard Business Hours
Location: Bengaluru / Gurugram / Nagpur
Notice Period: Max 30 days

About The Client
A technology-driven product engineering company focused on embedded systems, connected devices, and Android platform development. Known for working with top-tier OEMs on innovative, mission-critical projects.

About The Role
We are hiring a skilled Data Engineer (FME) to develop, automate, and support data transformation pipelines that handle complex spatial and non-spatial datasets. This role requires hands-on expertise in FME workflows, spatial data validation, PostGIS, and Python scripting, with the ability to support dashboards and collaborate across tech and ops teams.

Must-Have Qualifications
- Bachelor's degree in Engineering (B.E./B.Tech.)
- 4–8 years of experience in data integration or ETL development
- Proficiency in building FME workflows for data transformation
- Strong skills in PostgreSQL/PostGIS and spatial data querying
- Ability to write validation and transformation logic in Python or SQL
- Experience handling formats like GML, Shapefile, GeoJSON, and GPKG
- Familiarity with coordinate systems and geometry validation (e.g., EPSG:27700)
- Working knowledge of cron jobs, logging, and scheduling automation

Preferred Tools & Technologies
- ETL/Integration: FME, Python, Talend (optional)
- Spatial DB: PostGIS, Oracle Spatial
- GIS Tools: QGIS, ArcGIS
- Scripting: Python, SQL
- Formats: CSV, JSON, GPKG, XML, Shapefiles
- Workflow Tools: Jira, Git, Confluence

Key Responsibilities
The role involves designing and automating ETL pipelines using FME, applying custom transformers, and scripting in Python for data validation and transformation. It requires working with spatial data in PostGIS, fixing geometry issues, and ensuring alignment with required coordinate systems. The engineer will also support dashboard integrations by creating SQL views and tracking processing metadata. Additional responsibilities include implementing automation through FME Server, cron jobs, and CI/CD pipelines, as well as collaborating with analysts and operations teams to translate business rules, interpret validation reports, and ensure compliance with LA and HMLR specifications.
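The responsibilities above mention fixing geometry issues in PostGIS and aligning spatial data to EPSG:27700. As a rough, hypothetical sketch of that kind of check (not part of the posting), a Python script using psycopg2 could look like the following; the table name land_parcels and the column geom are invented placeholders.

```python
# Illustrative sketch only: flag, repair, and reproject geometries in PostGIS.
# Connection details, the table name (land_parcels), and the geometry column
# (geom) are hypothetical placeholders, not taken from the posting.
import psycopg2

conn = psycopg2.connect(dbname="gis", user="etl", password="***", host="localhost")
with conn, conn.cursor() as cur:
    # Report invalid geometries and the reason they fail validation
    cur.execute(
        "SELECT id, ST_IsValidReason(geom) FROM land_parcels WHERE NOT ST_IsValid(geom);"
    )
    for row_id, reason in cur.fetchall():
        print(f"Invalid geometry {row_id}: {reason}")

    # Repair invalid shapes and align everything to British National Grid (EPSG:27700)
    cur.execute(
        """
        UPDATE land_parcels
        SET geom = ST_Transform(ST_MakeValid(geom), 27700)
        WHERE NOT ST_IsValid(geom) OR ST_SRID(geom) <> 27700;
        """
    )
conn.close()
```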
Posted 6 days ago
4.0 years
0 Lacs
Hyderābād
On-site
Job Summary:
We are looking for an experienced Data Engineer with 4+ years of proven expertise in building scalable data pipelines, integrating complex datasets, and working with cloud-based and big data technologies. The ideal candidate should have hands-on experience with data modeling, ETL processes, and real-time data streaming.

Key Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines and ETL workflows.
- Work with large datasets from various sources, ensuring data quality and consistency.
- Collaborate with Data Scientists, Analysts, and Software Engineers to support data needs.
- Optimize data systems for performance, scalability, and reliability.
- Implement data governance and security best practices.
- Troubleshoot data issues and identify improvements in data processes.
- Automate data integration and reporting tasks.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience in data engineering or similar roles.
- Strong programming skills in Python, SQL, and Shell scripting.
- Experience with ETL tools (e.g., Apache Airflow, Talend, AWS Glue).
- Proficiency in data modeling, data warehousing, and database design.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services like S3, Redshift, BigQuery, Snowflake.
- Experience with big data technologies such as Spark, Hadoop, Kafka.
- Strong understanding of data structures, algorithms, and system design.
- Familiarity with CI/CD tools, version control (Git), and Agile methodologies.

Preferred Skills:
- Experience with real-time data streaming (Kafka, Spark Streaming).
- Knowledge of Docker, Kubernetes, and infrastructure-as-code tools like Terraform.
- Exposure to machine learning pipelines or data science workflows is a plus.

Interested candidates can send their resume.

Job Type: Full-time
Schedule: Day shift
Work Location: In person
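Apache Airflow is listed among the ETL tools above. For orientation only, a minimal Airflow DAG wiring an extract-transform-load sequence might look like the sketch below; the DAG id, schedule, and task bodies are hypothetical placeholders rather than anything specified in the posting.

```python
# Minimal, hypothetical Airflow DAG sketch: three tasks chained in sequence.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from the source systems")


def transform():
    print("clean, deduplicate, and model the data")


def load():
    print("write curated tables to the warehouse")


with DAG(
    dag_id="daily_sales_etl",       # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```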
Posted 6 days ago
0 years
4 - 7 Lacs
Gurgaon
On-site
A Software Engineer is curious and self-driven to build and maintain multi-terabyte operational marketing databases and integrate them with cloud technologies. Our databases typically house millions of individuals and billions of transactions and interact with various web services and cloud-based platforms. Once hired, the qualified candidate will be immersed in the development and maintenance of multiple database solutions to meet global client business objectives.

Job Description:

Key responsibilities:
- 2–4 years of experience.
- Will work under close supervision of Tech Leads / Lead Devs.
- Should be able to understand detailed design with minimal explanation.
- Individual contributor, able to perform mid- to complex-level tasks with minimal supervision. Senior team members will peer review assigned tasks.
- Build and configure our Marketing Database / Data environment platform by integrating feeds as per detailed design / transformation logic.
- Good knowledge of Unix scripting and/or Python.
- Must have strong knowledge of SQL.
- Good understanding of ETL tools (Talend, Informatica, Datastage, Ab Initio, etc.) as well as database skills (Oracle, SQL Server, Teradata, Vertica, Redshift, Snowflake, BigQuery, Azure DW, etc.).
- Fair understanding of relational databases, stored procs, etc.
- Experience in cloud computing (one or more of AWS, Azure, GCP) will be a plus.
- Requires less supervision and guidance from senior resources.

Location: DGS India - Gurugram - Golf View Corporate Towers
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
Posted 6 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Company
Gentrack provides leading utilities across the world with innovative cleantech solutions. The global pace of change is accelerating, and utilities need to rebuild for a more sustainable future. Working with some of the world's biggest energy and water companies, as well as innovative challenger brands, we are helping companies reshape what it means to be a utilities business. We are driven by our passion to create positive impact. That is why utilities rely on us to drive innovation, deliver great customer experiences, and secure profits. Together, we are renewing utilities.

Our Values and Culture
Colleagues at Gentrack are one big team, working together to drive efficiency in two of the planet's most precious resources, energy and water. We are passionate people who want to drive change through technology and believe in making a difference. Our values drive decisions and how we interact and communicate with customers, partners, shareholders, and each other. Our core values are:
- Respect for the planet
- Respect for our customers
- Respect for each other

Gentrackers are a group of smart thinkers and dedicated doers. We are a diverse team who love our work and the people we work with and who collaborate and inspire each other to deliver creative solutions that make our customers successful. We are a team that shares knowledge, asks questions, raises the bar, and are expert advisers. At Gentrack we care about doing honest business that is good for not just customers but families, communities, and ultimately the planet. Gentrackers continuously look for a better way and drive quality into everything they do. This is a truly exciting time to join Gentrack, with a clear growth strategy and a world-class leadership team working to fulfil Gentrack's global aspirations by having the most talented people, an inspiring culture, and a technology-first, people-centric business.

The Opportunity
We are seeking an experienced Data Migration Manager to lead our global data migration practice and drive successful delivery of complex data migrations in our customers' transformation projects. The Data Migration Manager will be responsible for overseeing the strategic planning, execution, and management of data migration initiatives across our global software implementation projects. This critical role ensures seamless data transition, quality, and integrity for our clients. In line with our value of 'Respect for the Planet', we encourage all our people to provide leadership through participating in our sustainability initiatives, including activities run by the regional GSTF, encouraging our people to engage in and drive sustainable behaviours, and supporting organisational change and global sustainability programs.

The Specifics
- Lead and manage a global team of data migration experts, providing strategic direction and professional development
- Develop and maintain comprehensive data migration methodologies and best practices applicable to utility sector software implementations
- Design and implement robust data migration strategies that address the unique challenges of utility industry data ecosystems
- Collaborate with solution architects, project managers, and client teams to define detailed data migration requirements and approaches
- Provide guidance and advice across the entire data migration lifecycle, including:
  - Source data assessment and profiling
  - Data cleansing and transformation strategies
  - Migration planning and risk mitigation
  - Execution of migration scripts and processes
  - Validation, reconciliation and quality assurance of migrated data
- Ensure compliance with data protection regulations and industry-specific standards across different global regions
- Develop and maintain migration toolsets and accelerators to improve efficiency and repeatability of migration processes
- Create comprehensive documentation, migration playbooks, and standard operating procedures
- Conduct regular performance reviews of migration projects and implement continuous improvement initiatives
- Manage and mitigate risks associated with complex data migration projects
- Provide technical leadership and mentorship to the data migration team

What we're looking for (you don't need to be a guru at all, we're looking forward to coaching and collaborating with you):
- Proficiency in data migration tools (e.g., Informatica, Talend, Microsoft SSIS)
- Experience with customer information system (CIS) and/or billing system migrations
- Knowledge of data governance frameworks
- Understanding of utility industry data models and integration challenges
- Familiarity with cloud migration strategies, including Salesforce
- Strategic thinking and innovative problem-solving
- Strong leadership and team management capabilities
- Excellent written and verbal communication skills across technical and non-technical audiences
- Ability to oversee a number of complex, globally dispersed projects
- Cultural sensitivity and adaptability

What we offer in return:
- Personal growth – in leadership, commercial acumen and technical excellence
- To be part of a global, winning high-growth organization – with a career path to match
- A vibrant culture full of people passionate about transformation and making a difference – with a one-team, collaborative ethos
- A competitive reward package that truly awards our top talent
- A chance to make a true impact on society and the planet

Gentrack want to work with the best people, no matter their background. So, if you are passionate about learning new things and keen to join the mission, you will fit right in.
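The lifecycle above calls out validation and reconciliation of migrated data. A bare-bones reconciliation pass, sketched here in Python with SQLite standing in for the real source and target systems, might simply compare row counts table by table; the table names and database files are hypothetical.

```python
# Illustrative reconciliation sketch: compare row counts between source and target
# after a migration run. SQLite is only a stand-in for the real drivers, and the
# table names (customers, meters, billing_transactions) are hypothetical.
import sqlite3


def row_count(conn, table):
    """Return the number of rows in a table."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]


source = sqlite3.connect("source.db")
target = sqlite3.connect("target.db")

for table in ["customers", "meters", "billing_transactions"]:
    src, tgt = row_count(source, table), row_count(target, table)
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{table}: source={src} target={tgt} {status}")
```

A fuller reconciliation would typically also compare column-level checksums or key-by-key hashes, but the count comparison above is the usual first gate.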
Posted 6 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position: Database
Location: Noida, India
www.SEW.ai

Who We Are:
SEW, with its innovative and industry-leading cloud platforms, delivers the best Digital Customer Experiences (CX) and Workforce Experiences (WX), powered by AI, ML, and IoT Analytics, to global energy, water, and gas providers. At SEW, the vision is to Engage, Empower, and Educate billions of people to save energy and water. We partner with businesses to deliver platforms that are easy to use, integrate seamlessly, and help build a strong technology foundation that allows them to become future-ready.

Searching for your dream job? We are a true global company that values building meaningful relationships and maintaining a passionate work environment while fostering innovation and creativity. At SEW, we firmly believe that each individual contributes to our success and, in return, we provide opportunities for them to learn new skills and build a rewarding professional career.

A Couple of Pointers:
• We are the fastest growing company with over 420+ clients and 1550+ employees.
• Our clientele is based out of the USA, Europe, Canada, Australia, Asia Pacific, and the Middle East.
• Our platforms engage millions of global users, and we keep adding millions every month.
• We have been awarded 150+ accolades to date. Our clients are continually awarded by industry analysts for implementing our award-winning product.
• We have been featured by Forbes, Wall Street Journal, and LA Times for our continuous innovation and excellence in the industry.

Who are we looking for?
An ideal candidate must demonstrate in-depth knowledge and understanding of RDBMS concepts and be experienced in writing complex queries and data integration processes in SQL/TSQL and NoSQL. This individual will be responsible for helping the design, development, and implementation of new and existing applications.

Roles and Responsibilities:
• Review the existing database design and data management procedures and provide recommendations for improvement.
• Provide subject matter expertise in the design of database schemas and perform data modeling (logical and physical models) for product feature enhancements as well as extending analytical capabilities.
• Develop technical documentation as needed.
• Architect, develop, validate and communicate Business Intelligence (BI) solutions like dashboards, reports, KPIs, instrumentation, and alert tools.
• Define data architecture requirements for cross-product integration within and across cloud-based platforms.
• Analyze, architect, develop, validate and support integrating data into the SEW platform from external data sources: files (XML, CSV, XLS, etc.), APIs (REST, SOAP), and RDBMS.
• Perform thorough analysis of complex data and recommend actionable strategies.
• Effectively translate data modeling and BI requirements into the design process.
• Big Data platform design, i.e. tool selection, data integration, and data preparation for predictive modeling.

Required Skills:
• Minimum of 4-6 years of experience in data modeling (including conceptual, logical and physical data models).
• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, Datastage, etc.
• 4-6 years of experience as a database developer in Oracle, MS SQL or another enterprise database with a focus on building data integration processes.
• Exposure to a NoSQL technology, preferably MongoDB.
• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica or Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).
• Understanding of data warehousing concepts and decision support systems.
• Ability to deal with sensitive and confidential material and adhere to worldwide data security requirements.
• Experience writing documentation for design and feature requirements.
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.
• Excellent communication and collaboration skills.
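The responsibilities above include integrating data from REST APIs alongside file and RDBMS sources, with MongoDB named as the preferred NoSQL store. Purely as an illustration, the sketch below pulls records from a hypothetical REST endpoint and stages them in MongoDB; the URL, database, and collection names are invented, and it assumes the endpoint returns a JSON array of objects.

```python
# Illustrative only: fetch records from a REST endpoint and stage them in MongoDB.
# The endpoint URL, database, collection, and field names are hypothetical.
import requests
from pymongo import MongoClient

# Assumes the endpoint returns a JSON array of objects
resp = requests.get("https://api.example.com/v1/meter-readings", timeout=30)
resp.raise_for_status()
readings = resp.json()

client = MongoClient("mongodb://localhost:27017")
staging = client["sew_staging"]["meter_readings"]

if readings:
    result = staging.insert_many(readings)
    print(f"Staged {len(result.inserted_ids)} readings")
else:
    print("No readings returned by the API")
```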
Posted 6 days ago
8.0 years
0 Lacs
Tamil Nadu, India
On-site
Job Title: Data Engineer

About VXI
VXI Global Solutions is a BPO leader in customer service, customer experience, and digital solutions. Founded in 1998, the company has 40,000 employees in more than 40 locations in North America, Asia, Europe, and the Caribbean. We deliver omnichannel and multilingual support, software development, quality assurance, CX advisory, and automation & process excellence to the world's most respected brands.

VXI is one of the fastest growing, privately held business services organizations in the United States and the Philippines, and one of the few US-based customer care organizations in China. VXI is also backed by private equity investor Bain Capital. Our initial partnership ran from 2012 to 2016 and was the beginning of prosperous times for the company. During this period, not only did VXI expand our footprint in the US and Philippines, but we also gained ground in the Chinese and Central American markets. We also acquired Symbio, expanding our global technology services offering and enhancing our competitive position. In 2022, Bain Capital re-invested in the organization after completing a buy-out from Carlyle. This is a rare occurrence in the private equity space and shows the level of performance VXI delivers for our clients, employees, and shareholders. With this recent investment, VXI has started on a transformation to radically improve the CX experience through an industry-leading generative AI product portfolio that spans hiring, training, customer contact, and feedback.

Job Description:
We are seeking talented and motivated Data Engineers to join our dynamic team and contribute to our mission of harnessing the power of data to drive growth and success. As a Data Engineer at VXI Global Solutions, you will play a critical role in designing, implementing, and maintaining our data infrastructure to support our customer experience and management initiatives. You will collaborate with cross-functional teams to understand business requirements, architect scalable data solutions, and ensure data quality and integrity. This is an exciting opportunity to work with cutting-edge technologies and shape the future of data-driven decision-making at VXI Global Solutions.

Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to ingest, transform, and store data from various sources.
- Collaborate with business stakeholders to understand data requirements and translate them into technical solutions.
- Implement data models and schemas to support analytics, reporting, and machine learning initiatives.
- Optimize data processing and storage solutions for performance, scalability, and cost-effectiveness.
- Ensure data quality and integrity by implementing data validation, monitoring, and error handling mechanisms.
- Collaborate with data analysts and data scientists to provide them with clean, reliable, and accessible data for analysis and modeling.
- Stay current with emerging technologies and best practices in data engineering and recommend innovative solutions to enhance our data capabilities.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven 8+ years' experience as a data engineer or in a similar role.
- Proficiency in SQL, Python, and/or other programming languages for data processing and manipulation.
- Experience with relational and NoSQL databases (e.g., SQL Server, MySQL, Postgres, Cassandra, DynamoDB, MongoDB, Oracle), data warehousing (e.g., Vertica, Teradata, Oracle Exadata, SAP Hana), and data modeling concepts.
- Strong understanding of distributed computing frameworks (e.g., Apache Spark, Apache Flink, Apache Storm) and cloud-based data platforms (e.g., AWS Redshift, Azure, Google BigQuery, Snowflake).
- Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker, Apache Superset) and data pipeline tools (e.g., Airflow, Kafka, Data Flow, Cloud Data Fusion, Airbyte, Informatica, Talend) is a plus.
- Understanding of data and query optimization, query profiling, and query performance monitoring tools and techniques.
- Solid understanding of ETL/ELT processes, data validation, and data security best practices.
- Experience with version control systems (Git) and CI/CD pipelines.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Join VXI Global Solutions and be part of a dynamic team dedicated to driving innovation and delivering exceptional customer experiences. Apply now to embark on a rewarding career in data engineering with us!
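Since the role emphasizes scalable pipelines built on Spark and cloud storage, here is a minimal, hypothetical PySpark batch job for flavour: it reads raw CSV files, applies light cleaning, and writes partitioned Parquet. The S3 paths and column names are placeholders, not anything specified by the posting.

```python
# Illustrative PySpark sketch of a simple batch pipeline: ingest, clean, publish.
# The S3 paths and column names (order_id, order_ts, amount) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch").getOrCreate()

orders = (
    spark.read.option("header", True).csv("s3://raw-bucket/orders/*.csv")
    .withColumn("order_ts", F.to_timestamp("order_ts"))   # parse the event timestamp
    .withColumn("order_date", F.to_date("order_ts"))      # derive a partition column
    .dropDuplicates(["order_id"])                          # basic deduplication
    .filter(F.col("amount").isNotNull())                   # drop incomplete records
)

# Partitioned Parquet output keeps downstream analytical queries cheap
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders/"
)
```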
Posted 6 days ago
6.0 years
0 Lacs
Sanganer, Rajasthan, India
On-site
Unlock yourself. Take your career to the next level.

At Atrium, we live and deliver at the intersection of industry strategy, intelligent platforms, and data science — empowering our customers to maximize the power of their data to solve their most complex challenges. We have a unique understanding of the role data plays in the world today and serve as market leaders in intelligent solutions. Our data-driven, industry-specific approach to business transformation for our customers places us uniquely in the market.

Who are you?
You are smart, collaborative, and take ownership to get things done. You love to learn and are intellectually curious in business and technology tools, platforms, and languages. You are energized by solving complex problems and bored when you don't have something to do. You love working in teams and are passionate about pulling your weight to make sure the team succeeds.

What will you be doing at Atrium?
In this role, you will join the best and brightest in the industry to skillfully push the boundaries of what's possible. You will work with customers to make smarter decisions through innovative problem-solving using data engineering, analytics, and systems of intelligence. You will partner to advise, implement, and optimize solutions through industry expertise, leading cloud platforms, and data engineering.

As a Snowflake Data Engineering Lead, you will be responsible for expanding and optimizing the data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. You will support the software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

In This Role, You Will
- Lead the design and architecture of end-to-end data warehousing and data lake solutions, focusing on the Snowflake platform, incorporating best practices for scalability, performance, security, and cost optimization
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Lead and mentor both onshore and offshore development teams, creating a collaborative environment
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, DBT, Python, AWS, and Big Data tools
- Develop ELT processes to ensure timely delivery of required data for customers
- Implement Data Quality measures to ensure accuracy, consistency, and integrity of data
- Design, implement, and maintain data models that can support the organization's data storage and analysis needs
- Deliver technical and functional specifications to support data governance and knowledge sharing

In This Role, You Will Have
- Bachelor's degree in Computer Science, Software Engineering, or an equivalent combination of relevant work experience and education
- 6+ years of experience delivering consulting services to medium and large enterprises. Implementations must have included a combination of the following experiences: Data Warehousing or Big Data consulting for mid-to-large-sized organizations
- 3+ years of experience specifically with Snowflake, demonstrating deep expertise in its core features and advanced capabilities
- Strong analytical skills with a thorough understanding of how to interpret customer business needs and translate those into a data architecture
- SnowPro Core certification is highly desired
- Hands-on experience with Python (Pandas, DataFrames, functions)
- Strong proficiency in SQL (stored procedures, functions), including debugging, performance optimization, and database design
- Strong experience with Apache Airflow and API integrations
- Solid experience in any one of the ETL/ELT tools (DBT, Coalesce, Wherescape, Mulesoft, Matillion, Talend, Informatica, SAP BODS, DataStage, Dell Boomi, etc.)
- Nice to have: experience in Docker, DBT, data replication tools (SLT, Fivetran, Airbyte, HVR, Qlik, etc.), Shell Scripting, Linux commands, AWS S3, or Big Data technologies
- Strong project management, problem-solving, and troubleshooting skills with the ability to exercise mature judgment
- Enthusiastic, professional, and confident team player with a strong focus on customer success who can present effectively even under adverse conditions
- Strong presentation and communication skills

Next Steps
Our recruitment process is highly personalized. Some candidates complete the hiring process in one week, others may take longer, as it's important we find the right position for you. It's all about timing and can be a journey as we continue to learn about one another. We want to get to know you and encourage you to be selective - after all, deciding to join a company is a big decision!

At Atrium, we believe a diverse workforce allows us to match our growth ambitions and drive inclusion across the business. We are an equal opportunity employer, and all qualified applicants will receive consideration for employment.
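The responsibilities above include building ELT processes on Snowflake with Python and SQL. A stripped-down, hypothetical illustration using the Snowflake Python connector might look like the following; the account, warehouse, schema, and table names are invented placeholders.

```python
# Illustrative only: one ELT step run through the Snowflake Python connector.
# Account, credentials, warehouse, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()
# Aggregate raw orders into a curated reporting table
cur.execute(
    """
    CREATE OR REPLACE TABLE CURATED.DAILY_SALES AS
    SELECT order_date, region, SUM(amount) AS total_amount
    FROM STAGING.RAW_ORDERS
    GROUP BY order_date, region
    """
)
cur.close()
conn.close()
```

In practice a tool such as DBT or Airflow (both named in the listing) would own the scheduling and dependency management around a step like this.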
Posted 6 days ago
0.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Job Title: Talend Lead
Experience: 8+ Years (5–6+ years in Talend)
Location: Hybrid – Bengaluru, Hyderabad, Chennai, Pune (In-office 1–2 days/week)
Work Hours: 2 PM – 11 PM IST (Work from office till 6 PM, then resume remotely)
Notice Period: Immediate to 30 days only (Candidates already serving notice preferred)
Budget: ₹27–30 LPA (inclusive of 5% variable)
Job Type: Full-time

Key Responsibilities:
- Lead and mentor a team of Talend ETL developers (6+ months of lead/mentoring experience acceptable).
- Design, develop, and optimize Talend-based ETL solutions with a strong focus on performance and data quality.
- Collaborate with cross-functional stakeholders to define scope, timelines, and deliverables.
- Implement robust data integration pipelines using Talend and AWS services.
- Ensure smooth data flow between source and target systems such as relational databases, APIs, and flat files.
- Drive best practices in code, documentation, error handling, and job scheduling.
- Participate in project planning, troubleshooting, and technical reviews.
- Contribute to system integration testing, unit testing, and deployment processes.

Technical Skills Required:
- Strong experience in Talend Studio, Talend TAC/TMC.
- Advanced SQL for querying and data transformations.
- Hands-on experience with AWS Cloud Services (S3, Redshift, EC2, Glue, Athena, etc.).
- Proficiency in working with data from relational and NoSQL databases, flat files, and APIs (REST/SOAP).
- Knowledge of ETL/ELT, data profiling, and quality checks.
- Familiarity with job scheduling tools and performance monitoring frameworks.
- Comfortable with Git, Terraform, GitLab, and VS Code.
- Exposure to scripting languages like Python or Shell is a plus.

Preferred Qualifications:
- Bachelor's in Computer Science, IT, or a related field.
- Talend and AWS certifications are highly desirable.
- Experience with US healthcare clients is a big plus.
- Familiarity with Agile methodology and DevOps practices.
- Understanding of Big Data platforms (Hadoop, Spark) and Talend Big Data modules.

Important Screening Criteria:
- No short-term projects or employment gaps over 3 months.
- No candidates from JNTU.
- Strictly immediate joiners or candidates with up to 30 days' notice.

Job Types: Full-time, Permanent
Pay: ₹2,700,000.00 - ₹3,000,000.00 per year
Schedule: Day shift, Evening shift, Monday to Friday
Ability to commute/relocate: Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience: Talend: 6 years (Required)
Work Location: In person
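The listing pairs Talend with AWS services such as S3 and Redshift for moving flat-file extracts. As a small, hypothetical illustration of that hand-off (outside Talend itself, whose jobs are built in Talend Studio), the snippet below stages an extract file in S3 with boto3; the bucket, key prefix, and local file path are invented.

```python
# Illustrative only: stage a flat-file extract in S3 ahead of a downstream load
# (for example a Redshift COPY). Bucket, key prefix, and file path are hypothetical.
from datetime import date

import boto3

s3 = boto3.client("s3")

local_file = "exports/customers_extract.csv"
bucket = "etl-staging-bucket"
key = f"talend/customers/{date.today():%Y/%m/%d}/customers.csv"

# Upload the extract so the warehouse load step can pick it up
s3.upload_file(local_file, bucket, key)
print(f"Uploaded {local_file} to s3://{bucket}/{key}")
```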
Posted 6 days ago
10.0 years
0 Lacs
India
Remote
Role: Senior Azure Data Engineer (ETL / Data Warehouse background)
Location: Remote, India
Duration: Long Term Contract
Candidates needed with 10+ years of experience

Must-have Skills:
• Minimum 5 years of experience in modern data engineering / data warehousing / data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
• 10+ years of proven experience with SQL, schema design, and dimensional data modeling
• Solid knowledge of data warehouse best practices, development standards, and methodologies
• Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
• Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
• Be an independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment.
• Excellent communication and teamwork abilities.

Nice-to-Have Skills:
• Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge.
• SAP ECC / S/4 and HANA knowledge.
• Intermediate knowledge of Power BI
• Azure DevOps and CI/CD deployments, cloud migration methodologies and processes
Posted 6 days ago
3.0 years
0 Lacs
India
Remote
Title: Azure Data Engineer
Location: Remote
Employment type: Full-time with BayOne

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For
- 3+ years of experience in data engineering or analytics engineering
- Hands-on experience with cloud data platforms and large-scale data processing
- Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
- Minimum 3 years of experience in modern data engineering / data warehousing / data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design and dimensional data modelling
- Solid knowledge of data warehouse best practices, development standards and methodologies
- Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
- Be an independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment.
- Excellent communication and teamwork abilities.

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge.
- SAP ECC / S/4 and HANA knowledge.
- Intermediate knowledge of Power BI
- Azure DevOps and CI/CD deployments, cloud migration methodologies and processes

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or on the basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
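The posting above centres on Databricks, PySpark, and lakehouse architectures. A hypothetical sketch of the incremental-load pattern common on such a stack is shown below, using a Delta Lake MERGE; the table names, staging path, and join key are placeholders, and it assumes Delta Lake is available on the cluster and that the curated table already exists with a matching schema.

```python
# Illustrative sketch of an incremental lakehouse load via a Delta Lake MERGE.
# Table names, the staging path, and the join key (customer_id) are hypothetical;
# it assumes a Spark runtime with Delta Lake and an existing curated.customers table.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("incremental_customers")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Stage today's changed records and expose them to SQL
updates = spark.read.option("header", True).csv("/mnt/raw/customer_changes.csv")
updates.createOrReplaceTempView("customer_updates")

# Upsert into the curated Delta table: update existing keys, insert new ones
spark.sql("""
    MERGE INTO curated.customers AS target
    USING customer_updates AS source
    ON target.customer_id = source.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```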
Posted 6 days ago
4.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
On-site
Key Responsibilities
- Design, develop, and maintain ETL/ELT workflows using Talend Data Integration tools.
- Implement and optimize data pipelines to ingest, transform, and load data into Snowflake.
- Collaborate with data analysts, architects, and business stakeholders to gather requirements and translate them into technical specifications.
- Monitor data pipelines for performance and reliability, and troubleshoot issues as needed.
- Develop and maintain technical documentation related to ETL processes and data models.
- Perform data quality checks and ensure data governance policies are followed.
- Optimize Snowflake warehouse performance and storage costs through partitioning, clustering, and query tuning.
- Support the migration of legacy ETL processes.

Skills & Qualifications:
- 4+ years of experience with Talend Data Integration (preferably Talend Open Studio or Talend Enterprise).
- 2+ years of hands-on experience with the Snowflake data warehouse platform.
- Proficient in SQL, data modeling, and performance tuning.
- Experience with cloud platforms (AWS, Azure, or GCP) and cloud data ecosystems.
- Strong understanding of data warehousing concepts, ETL best practices, and data governance.
- Experience in working with APIs, flat files, and various data sources/formats.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines for data workflows.
- Excellent communication and collaboration skills.

(ref:hirist.tech)
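One responsibility above is tuning Snowflake performance and storage through clustering and query tuning. The hypothetical snippet below shows what such statements can look like when issued from Python via the Snowflake connector; all object names and credentials are invented placeholders.

```python
# Illustrative only: clustering and layout-inspection statements of the kind
# mentioned above, issued through the Snowflake Python connector.
# Account, credentials, and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="CURATED",
)
cur = conn.cursor()
# Cluster a large fact table on the columns most queries filter by
cur.execute("ALTER TABLE FACT_SALES CLUSTER BY (SALE_DATE, REGION)")
# Inspect how well the physical layout matches those filter columns
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('FACT_SALES', '(SALE_DATE, REGION)')")
print(cur.fetchone()[0])
cur.close()
conn.close()
```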
Posted 6 days ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us
We are a growing Global Healthcare Consulting Firm, headquartered in Princeton, New Jersey. Our expert teams of Business Analysts, based across the US, Canada, Europe, and India, provide Analytics and Business Solutions using our worldwide delivery models for a wide range of clients. Our clients include established, multinational BioPharma leaders and innovators, as well as entrepreneurial firms on the cutting edge of science. We have deep expertise in Forecasting, Business Analytics, Competitive Intelligence, Sales Analytics, and the Analytics Centre of Excellence Model. Our wealth of therapeutic area experience cuts across Oncology, Immunoscience, CNS, CV-Met, and Rare Diseases. We support our clients' needs in Primary Care, Specialty Care, and Hospital business units, and we have managed portfolios in the Biologics space, Branded Pharmaceuticals, Generics, APIs, Diagnostics, and Packaging & Delivery Systems.

Responsibilities
- Work closely with business teams and stakeholders across the pharmaceutical value chain and develop reports and dashboards that tell a story.
- Recommend KPIs and help generate custom analysis and insights.
- Propose new visualization ideas for our customers, considering the audience type.
- Design Tableau dashboards and reports that are self-explanatory.
- Keep the user at the center while designing the reports, thereby enhancing the user experience.
- Gather requirements while working closely with our Global Clients.
- Mentor other developers on the team on Tableau-related technical challenges.
- Propagate Tableau best practices within and across the team.
- Set up reports that can be maintained with ease and are scalable to other use cases.
- Interact with the AI/ML team and incorporate new ideas into the final deliverables for the client.
- Work closely with cross-functional teams like Advanced Analytics, Competitive Intelligence, and Forecasting.
- Develop and foster client relationships and serve as a point of contact for projects.

Qualifications And Areas Of Expertise
- Educational Qualification: BE/BTech/MTech/MCA from a reputed institute.
- Minimum 3-5 years of experience.
- Proficient with tools including Tableau Desktop, Tableau Server, MySQL, MS Excel, and ETL tools (Alteryx, Tableau Prep, or Talend).
- Knowledge of SQL.
- Experience in advanced LOD calcs, custom visualizations, data cleaning, and restructuring.
- Strong analytical and problem-solving skills with the ability to question facts.
- Excellent written and oral communication skills.

Nice To Have
- A valid U.S. business visa.
- Hands-on experience in Tableau, Python, and R.
- Hands-on experience with Qlik Sense and Power BI.
- Experience with Pharma / Healthcare data.

(ref:hirist.tech)
Posted 6 days ago
4.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About FieldAssist
Founded in 2014, FieldAssist is a cutting-edge B2B SaaS platform revolutionizing the route-to-market strategies of Consumer Packaged Goods (CPG) companies. Headquartered in Gurgaon with offices in Mumbai and Bangalore, FieldAssist empowers over 600+ brands, including Bisleri, Emami, Haldirams, Eureka Forbes, Philips, Jockey, and Nobel Hygiene, to automate and optimize their sales supply chains. Our technology is trusted by clients across 10 countries in Asia and Africa. Every day, more than 100,000 users leverage FieldAssist to reach 7.5 million outlets, making it a backbone of the technology-driven transformation in the CPG industry.

Job Overview
We are looking for a highly skilled Lead Data Analyst with deep expertise in B2B analytics, Power BI, ETL processes, and advanced SQL. In this role, you will be responsible for driving data-based decision-making through robust reporting, optimized data workflows, and insightful analysis of large and complex datasets.

Key Responsibilities
- Design and develop interactive dashboards and reports using Power BI to deliver real-time, actionable insights.
- Build and enhance ETL pipelines to seamlessly extract, transform, and load data from multiple sources.
- Write, optimize, and maintain complex SQL queries to support reporting and analytical requirements.
- Analyze large datasets to identify patterns, trends, and opportunities that can impact business outcomes.
- Collaborate cross-functionally with internal stakeholders to gather data requirements and translate them into analytical solutions.
- Ensure data accuracy, consistency, and quality across platforms and reporting tools.
- Mentor junior analysts and contribute to the evolution of the data analytics function.

Key Requirements
- 4–6 years of experience in data analytics, with a strong background in B2B SaaS or CPG domains.
- Expertise in Power BI, including DAX and dashboard creation.
- Proficiency in ETL processes and working with tools such as SQL Server Integration Services (SSIS), Talend, or similar.
- Strong command of advanced SQL for data manipulation and reporting.
- Experience working with large datasets and data warehouses.
- Strong problem-solving skills and business acumen.
- Excellent communication and stakeholder management skills.

(ref:hirist.tech)
Posted 6 days ago
125.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters.

The Position
In Roche Informatics, we build on Roche's 125-year history as one of the world's largest biotech companies, globally recognized for providing transformative innovative solutions across major disease areas. We combine human capabilities with cutting-edge technological innovations to do now what our patients need next. Our commitment to our patients' needs motivates us to deliver technology that evolves the practice of medicine. Be part of our inclusive team at Roche Informatics, where we're driven by a shared passion for technological novelties and optimal IT solutions.

About The Position
We are looking for a Data Engineer who will work closely with multi-disciplinary and multi-cultural teams to build structured, high-quality data solutions; the person may also lead technical squads. These solutions will be leveraged across Enterprise, Pharma and Diagnostics solutions to help our teams fulfill our mission: to do now what patients need next. In this position, you will require hands-on expertise in ETL pipeline development and data engineering. You should also be able to provide direction and guidance to developers, oversee development and unit testing, and document the developed solution. Building strong customer relationships for ongoing business is also a key aspect of this role. To succeed in this position, you should have experience with cloud-based Data Solution Architectures, the Software Development Life Cycle (including both Agile and waterfall methodologies), Data Engineering and ETL tools/platforms, and data modeling practices.

Your key responsibilities:
- Building and optimizing data ETL pipelines to support data analytics
- Developing and implementing data integrations with other systems and platforms
- Maintaining documentation for data pipelines and related processes
- Logical and physical modeling of datasets and applications
- Making Roche data assets accessible and findable across the organization
- Exploring new ways of building, processing, and analyzing data in order to deliver insights to our business partners
- Continuously refining data quality with testing, tooling and performance evaluation
- Working with business and functional stakeholders to understand data requirements and downstream analytics needs
- Partnering with business to ensure appropriate integration of functions to meet goals, as well as identifying and defining necessary system enhancements to deploy new products and process improvements
- Fostering a data-driven culture throughout the team and leading data engineering projects that will have an impact throughout the organization
- Working with data and analytics experts to strive for greater functionality in our data systems and products, and helping to grow our data team with exceptional engineers

Your qualifications and experience:
- Education in related fields (Computer Science, Computer Engineering, Mathematical Engineering, Information Systems) or job experience, preferably within multiple Data Engineering technologies
- 4+ years of experience with ETL development, data engineering and data quality assurance
- Good experience with Snowflake and its features
- Hands-on experience as a Data Engineer in cloud data solutions using Snowflake
- Experience working with Cloud Platform Services (AWS/Azure/GCP)
- Experience in ETL/ELT technologies like Talend/DBT or other ETL platforms
- Experience in preparing and reviewing new data flow patterns
- Excellent Python skills
- Strong RDBMS concepts and SQL development skills
- Strong focus on data pipeline automation
- Exposure to quality assurance and data quality activities is an added advantage
- DevOps / DataOps experience (especially data operations preferred)
- Readiness to work with multiple tech domains and streams
- Passionate about new technologies and experimentation
- Experience with Immuta and Monte Carlo is a plus

What you get:
- Good and stable working environment with attractive compensation and rewards package (according to local regulations)
- Annual bonus payment based on performance
- Access to various internal and external training platforms (e.g. LinkedIn Learning)
- Experienced and professional colleagues and a workplace that supports innovation
- Multiple Savings Plans with Employer Match
- Company's emphasis on employees' wellness and work-life balance (e.g. generous vacation days and OneRoche Wellness Days)
- Workplace flexibility policy
- State-of-the-art working environment and facilities
- And many more that the Talent Acquisition Partner will be happy to talk about!

Who we are
A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let's build a healthier future, together.

Roche is an Equal Opportunity Employer.
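The responsibilities above include continuously refining data quality with testing and tooling. As a loose, hypothetical illustration (not Roche's actual tooling), a small pandas-based gate run before a load might apply checks like the ones below; the file path, column names, and thresholds are placeholders.

```python
# Illustrative data-quality sketch: simple pandas checks that could gate an ETL load.
# File path, column names, and thresholds are hypothetical placeholders.
import pandas as pd

df = pd.read_parquet("staging/patient_visits.parquet")

checks = {
    # Primary key must be unique
    "no_duplicate_ids": df["visit_id"].is_unique,
    # At most 1% of rows may be missing the visit date
    "low_null_rate": df["visit_date"].isna().mean() < 0.01,
    # The newest load timestamp must be less than a day old
    "recent_data": pd.Timestamp.now() - df["load_ts"].max() < pd.Timedelta(days=1),
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```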
Posted 6 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Data Engineer is accountable for developing high quality data products to support the Bank's regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 9 to 11 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium sized components independently

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Expertise around Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
- Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
- File Formats: Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Experience of using a job scheduler e.g., Autosys. Exposure to Business Intelligence tools e.g., Tableau, Power BI

Certification on any one or more of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time

Most Relevant Skills
Please see the requirements listed above.

Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.

View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 6 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role
The Data Engineer is accountable for developing high quality data products to support the Bank's regulatory requirements and data driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 5 to 8 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Hands-on Mantas (Oracle FCCM) expertise throughout the full development life cycle, including: requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and ongoing technical support
- Ability to translate business needs (BRD) into effective technical solutions and documents (FRD/TSD)
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager, Scenario Development; thorough knowledge and hands-on experience in Mantas FSDM, DIS, Batch Scenario Manager
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker, Kubernetes
- File Formats: Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Basics of job schedulers like Autosys. Basics of entitlement management

Certification on any of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time

Most Relevant Skills
Please see the requirements listed above.

Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.

View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 6 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
Elicit business needs and define requirements for reporting solutions, which includes driving consensus across multiple stakeholders who often have different priorities and needs, and guiding business stakeholders as needed to clarify requirements and the presentation, format and layout options for the information
Support an iterative report development process and serve as a liaison between the BI developers and the business
Build scalable, fault-tolerant batch and real-time data ingestion pipelines, data transformation and data mining jobs
Analyse and develop Cloud and Big Data solutions
Assemble large, complex, analytics-relevant, quality datasets from different source systems in the organization
Work with different stakeholders to build data process automations for more efficient and reliable business processes
Implement software components needed for better data platform governance
Manage large-scale data repositories, including maintenance, scheduled backups and enhancements
Implement data and platform products to enable different users to carry out their tasks with ease and clarity
Work collaboratively with different users and stakeholders to ensure their success with the right technical solutions
Perform data warehousing activities such as transformation, enrichment and dataset metadata management so the team can use the platform effectively
Present analysis and interpretation of findings for operational and business review, planning and new product development
Partner with BI and report development resources to design and prototype innovative solutions
Support end users by answering questions, developing training materials and conducting training activities as needed
Document technical applications, specifications, and enhancements
Recommend ways to improve data reliability, quality and efficiency
Maintain production systems (Talend, Spark/Scala, Java microservices, Kafka, Hadoop, Cassandra, Elasticsearch)
Develop reusable patterns and encourage innovation that will increase team velocity
Anticipate issues and act proactively to address potential issues
Work with sometimes ambiguous/conceptual requirements and guide the technical team to provide functionality with the right amount of engineering
Lead engineers in making sound, sustainable, and practical technical decisions
Foster high-performance, collaborative technical work resulting in high-quality output
Collaborate on the design with other team members and product owners, both inside and outside the core team
Work with geographically distributed teams, with ample opportunity to learn from and mentor teammates in a fast-paced environment
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or reassignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
Undergraduate degree or equivalent experience
4+ years of hands-on experience as a Data Analyst
4+ years of experience with Python-based development
4+ years of experience with relational databases (SQL queries, stored procedures, CTEs, triggers, etc.)
4+ years of experience with Azure Cloud and Linux
2+ years of experience building and optimizing data pipelines, architectures and data sets
2+ years of hands-on experience with shell scripting on Linux
Experience with GitHub and Linux VMs
Experience with visualization tools like Power BI
Demonstrated success in building design patterns and software engineering best practices

Preferred Qualifications
Microsoft Power Platform (PowerApps, Power Automate and Power BI) experience or knowledge
Microsoft SharePoint Lists and Forms experience or knowledge
DevOps experience

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
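As a small, self-contained illustration of the SQL (CTE) and Python skills called out in the qualifications above, the sketch below runs a CTE-based aggregation from Python using the standard-library sqlite3 module. The claims table and its columns are hypothetical; in practice the same query shape would run against the actual relational database.

```python
# Illustrative only: a CTE-style aggregation run from Python, using the built-in
# sqlite3 module so the example is self-contained. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (member_id TEXT, claim_amount REAL, claim_date TEXT);
    INSERT INTO claims VALUES ('M1', 120.0, '2024-01-05'),
                              ('M1', 80.0,  '2024-02-10'),
                              ('M2', 300.0, '2024-01-20');
""")

query = """
WITH monthly AS (
    SELECT member_id,
           substr(claim_date, 1, 7) AS month,
           SUM(claim_amount)        AS total_amount
    FROM claims
    GROUP BY member_id, month
)
SELECT member_id, month, total_amount
FROM monthly
ORDER BY member_id, month;
"""

for row in conn.execute(query):
    print(row)
```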
Posted 6 days ago
9.0 - 14.0 years
20 - 35 Lacs
Hyderabad, Pune
Work from Office
Our client is a leading global IT services and consulting organization.

Job Description:
Shift Timings: 12:00 noon to 9:30 PM IST
Location: Hyderabad & Pune only
Role: Architect

Key skills required for the job are Talend DI (mandatory) and good exposure to RDBMS databases like Oracle and SQL Server.
• 3+ years of experience implementing ETL projects in a large-scale enterprise data warehouse environment; at least one successful Talend implementation with a DWH is a must.
• As a Senior Developer, the candidate is responsible for development, support, maintenance, and implementation of a complex project module. The candidate is expected to have in-depth knowledge of the specified technological area, including knowledge of applicable processes, methodologies, standards, products and frameworks.
• She/he should have experience applying standard software development principles using Talend.
• She/he should be able to work as an independent team member, capable of applying judgment to plan and execute HWB tasks.
• Build reusable Talend jobs, routines, and components to support data integration, quality and transformations.
Posted 6 days ago
1.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Req ID: 321498
We are currently seeking a Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties
• Work closely with the Lead Data Engineer to understand business requirements, and analyse and translate these requirements into technical specifications and solution design.
• Work closely with the Data Modeller to ensure data models support the solution design.
• Develop, test and fix ETL code using Snowflake, Fivetran, SQL and stored procedures.
• Analyse the data and ETL for defects and service tickets raised against solutions in production.
• Develop documentation and artefacts to support projects.

Minimum Skills Required
• ADF
• Fivetran (orchestration & integration)
• SQL
• Snowflake DWH
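Purely as an illustration of the Snowflake/SQL ETL work described in the duties above, here is a hedged sketch that runs a MERGE statement from Python via the Snowflake connector. The connection parameters, database objects, and columns are hypothetical placeholders.

```python
# Illustrative sketch: running a MERGE in Snowflake from Python.
# Connection details, table and column names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical
    user="etl_user",        # hypothetical
    password="********",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
MERGE INTO analytics.core.customers AS tgt
USING analytics.staging.customers_delta AS src
    ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
    VALUES (src.customer_id, src.email, src.updated_at);
"""

cur = conn.cursor()
try:
    cur.execute(merge_sql)
    print("Rows affected:", cur.rowcount)
finally:
    cur.close()
    conn.close()
```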
Posted 6 days ago
3.0 - 8.0 years
2 - 6 Lacs
Pune
Work from Office
Req ID: 324676
We are currently seeking a Data Analyst to join our team in Pune, Maharashtra (IN-MH), India (IN).

Key Responsibilities:
Extract, transform, and load (ETL) data from various sources, ensuring data quality, integrity, and accuracy.
Perform data cleansing, validation, and preprocessing to prepare structured and unstructured data for analysis.
Develop and execute queries, scripts, and data manipulation tasks using SQL, Python, or other relevant tools.
Analyze large datasets to identify trends, patterns, and correlations, drawing meaningful conclusions that inform business decisions.
Create clear and concise data visualizations, dashboards, and reports to communicate findings effectively to stakeholders.
Collaborate with clients and cross-functional teams to gather and understand data requirements, translating them into actionable insights.
Work closely with other departments to support their data needs.
Collaborate with Data Scientists and other analysts to support predictive modeling, machine learning, and statistical analysis.
Continuously monitor data quality and proactively identify anomalies or discrepancies, recommending corrective actions.
Stay up to date with industry trends, emerging technologies, and best practices to enhance analytical techniques.
Assist in the identification and implementation of process improvements to streamline data workflows and analysis.

Basic Qualifications:
3+ years of proficiency in data analysis tools such as Excel, SQL, R, or Python.
3+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects.
2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions.
Undergraduate or graduate degree preferred.
Ability to travel at least 25%.

Preferred Skills:
Strong proficiency in data analysis tools such as Python, SQL, and Talend (or any ETL tool).
Experience with data visualization tools like Power BI.
Experience with cloud data platforms.
Familiarity with ETL (Extract, Transform, Load) processes and tools.
Knowledge of machine learning techniques and tools.
Experience in a specific industry (e.g., financial services, healthcare, manufacturing) can be a plus.
Understanding of data governance and data privacy regulations.
Ability to query and manipulate databases and data warehouses.
Excellent analytical and problem-solving skills.
Strong communication skills with the ability to explain complex data insights to non-technical stakeholders.
Detail-oriented with a commitment to accuracy.
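As an illustrative sketch of the cleansing and validation responsibilities listed above, the following pandas snippet normalises column names, coerces types, and separates rows that fail basic checks. The input file and column names are hypothetical placeholders.

```python
# Illustrative data-cleansing/validation sketch with pandas.
# The file path and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("orders.csv")  # hypothetical input

# Basic cleansing: normalise column names, drop exact duplicates,
# coerce types, and flag rows that fail a simple validation rule.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.drop_duplicates()
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

invalid = df[df["order_date"].isna() | df["amount"].isna() | (df["amount"] < 0)]
clean = df.drop(invalid.index)

print(f"{len(clean)} valid rows, {len(invalid)} rows flagged for review")
```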
Posted 6 days ago
5.0 - 9.0 years
18 - 25 Lacs
Nagpur, Pune
Work from Office
Design, develop, and deploy Talend ETL jobs to move data from source to target (ODS). Implement one-to-one schema mappings for structured data transfer. Ensure performance optimization, data quality, and error handling across all pipelines.
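Talend implements this kind of job visually rather than in code, but purely to illustrate the one-to-one schema mapping idea, the sketch below copies rows from a hypothetical source table into an identically structured ODS table using Python's built-in sqlite3 module. Table names and columns are placeholders, not the actual pipeline.

```python
# Illustrative one-to-one source-to-ODS copy, using sqlite3 so the example is
# self-contained. In the actual role this mapping would be built as a Talend job;
# table and column names here are hypothetical.
import sqlite3

src = sqlite3.connect(":memory:")   # stands in for the source database
ods = sqlite3.connect(":memory:")   # stands in for the ODS target

src.executescript("""
    CREATE TABLE customers (customer_id TEXT, name TEXT, city TEXT);
    INSERT INTO customers VALUES ('C1', 'Asha', 'Pune'), ('C2', 'Ravi', 'Nagpur');
""")
ods.execute("CREATE TABLE customers (customer_id TEXT, name TEXT, city TEXT)")

# One-to-one schema mapping: same columns, copied with basic error handling.
rows = src.execute("SELECT customer_id, name, city FROM customers").fetchall()
try:
    ods.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)
    ods.commit()
except sqlite3.Error as exc:
    ods.rollback()
    print("Load failed:", exc)

print("Rows loaded:", ods.execute("SELECT COUNT(*) FROM customers").fetchone()[0])
```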
Posted 6 days ago
5.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Responsibilities
The Sr. Integration Developer (Senior Software Engineer) will work in the Professional Services Team and play a significant role in designing and implementing complex integration solutions using the Adeptia platform. This role requires hands-on expertise in developing scalable and efficient solutions to meet customer requirements. The engineer will act as a key contributor to the team's deliverables while mentoring junior engineers. They will ensure high-quality deliverables by collaborating with cross-functional teams and adhering to industry standards and best practices.

Responsibilities include but are not limited to:
● Collaborate with customers' Business and IT teams to understand integration requirements in the B2B/Cloud/API/Data/ETL/EAI integration space and implement solutions using the Adeptia platform
● Design, develop, and configure complex integration solutions, ensuring scalability, performance, and maintainability
● Take ownership of assigned modules and lead the implementation lifecycle from requirement gathering to production deployment
● Troubleshoot issues during implementation and deployment, ensuring smooth system performance
● Guide team members in addressing complex integration challenges and promote best practices and performance practices
● Collaborate with offshore/onshore and internal teams to ensure timely execution and coordination of project deliverables
● Write efficient, well-documented, and maintainable code, adhering to established coding standards
● Review code and designs of team members, providing constructive feedback to improve quality
● Participate in Agile processes, including Sprint Planning, Daily Standups, and Retrospectives, ensuring effective task management and delivery
● Stay updated with emerging technologies to continuously enhance technical expertise and team skills

Essential Skills:
● Technical
○ 5-7 years of hands-on experience in designing and implementing integration solutions across B2B, ETL, EAI, Cloud, API & Data Integration environments using leading platforms such as Adeptia, Talend, MuleSoft, or equivalent enterprise-grade tools
○ Proficiency in designing and implementing integration solutions, including integration processes, data pipelines, and data mappings, to facilitate the movement of data between applications and platforms
○ Proficiency in applying data transformation and data cleansing as needed to ensure data quality and consistency across different data sources and destinations
○ Good experience in performing thorough testing and validation of data integration processes to ensure accuracy, reliability, and data integrity
○ Proficiency in working with SOA, RESTful APIs, and SOAP web services with all security policies
○ Good understanding and implementation experience with various security concepts, best practices, and security standards and protocols such as OAuth, SSL/TLS, SSO, SAML, and IDP (Identity Provider)
○ Strong understanding of XML, XSD, XSLT, and JSON
○ Good understanding of RDBMS/NoSQL technologies (MSSQL, Oracle, MySQL)
○ Proficiency with transport protocols (HTTPS, SFTP, JDBC) and experience with messaging systems such as Kafka, ASB (Azure Service Bus) or RabbitMQ
○ Hands-on experience in Core Java and exposure to commonly used Java frameworks
● Non-Technical
○ 5-7 years of experience working in a services delivery organization, directly reporting to the client
○ Strong communication skills
○ Develop solutions and POCs based on customer and project needs
○ Excellent documentation and process diagramming skills
○ Excellent interpersonal skills for building and maintaining positive relationships
○ Exceptional collaboration skills with the ability to work effectively with customers and internal teams
○ Experienced in gathering business requirements, translating them into actionable technical plans, and aligning teams for successful execution
○ Strong analytical, troubleshooting, and problem-solving skills
○ Proven ability to lead and mentor junior team members
○ Self-motivated with a strong commitment to delivering high-quality results under tight deadlines

Desired Skills:
● Technical
○ Familiarity with JavaScript frameworks like ReactJS, AngularJS, or NodeJS
○ Exposure to integration standards (EDI, EDIFACT, IDOC)
○ Experience with modern web UI tools and frameworks
○ Exposure to DevOps tools such as Git, Jenkins, and CI/CD pipelines
● Non-Technical
○ Onshore experience working directly with customers
○ Strong time management skills and the ability to handle multiple priorities
○ Detail-oriented and enthusiastic about learning new tools and technologies
○ Committed to delivering high-quality results
○ Flexible, responsible, and focused on quality work
○ Ability to prioritize tasks, work under pressure, and collaborate with cross-functional teams
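As a small illustration of the REST/OAuth pattern referenced in the skills list above, the sketch below obtains an OAuth 2.0 access token (client-credentials grant) and calls a JSON API. All URLs, client IDs and payload fields are hypothetical placeholders, and this is generic HTTP usage rather than any Adeptia-specific API.

```python
# Illustrative REST + OAuth 2.0 sketch using the well-known requests library.
# Endpoints and credentials are hypothetical placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical
API_URL = "https://api.example.com/v1/orders"        # hypothetical

# 1. Get an access token via the client-credentials grant
token_resp = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials"},
    auth=("my-client-id", "my-client-secret"),       # hypothetical credentials
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. Call the protected API with the bearer token
api_resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {access_token}", "Accept": "application/json"},
    timeout=30,
)
api_resp.raise_for_status()
print(api_resp.json())
```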
Posted 1 week ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
ETL testers with automation testing experience in DBT and Snowflake, along with experience in Talend.
Posted 1 week ago