10.0 - 15.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Req ID: 322003 We are currently seeking a Sr. ETL Developer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Strong hands-on experience in SQL and PL/SQL (procedures, functions). Expert-level knowledge of ETL flows and jobs (ADF pipeline experience preferred). Experience with MS SQL Server (preferred), Oracle DB, PostgreSQL, MySQL. Good knowledge of data warehouse/data mart concepts. Good knowledge of data structures/models, integrity constraints, performance tuning, etc. Good knowledge of the insurance domain (preferred). Total experience: 7–10 years.
Posted 1 week ago
12.0 - 17.0 years
7 - 11 Lacs
Chennai
Work from Office
Req ID: 303369 We are currently seeking an Enterprise Resource Planning Advisor to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). Has more than 12 years of relevant experience with Oracle ERP Cloud data migration and a minimum of 4 end-to-end ERP Cloud implementations. Analyze Data and Mapping: Work with functional teams to understand data requirements and develop mappings to enable smooth migration using FBDI (File-Based Data Import) and ADFdi. Develop and Manage FBDI Templates: Design, customize, and manage FBDI templates to facilitate data import into Oracle SaaS applications, ensuring data structure compatibility and completeness. Configure ADFdi for Data Uploads: Use ADFdi (Application Development Framework Desktop Integration) for interactive data uploads, enabling users to manipulate and validate data directly within Excel. Data Validation and Quality Checks: Implement data validation rules and perform pre- and post-load checks to maintain data integrity and quality, reducing errors during migration. Execute and Troubleshoot Data Loads: Run data loads, monitor progress, troubleshoot errors, and optimize performance during the data migration process. Collaborate with Stakeholders: Coordinate with cross-functional teams, including project managers and business analysts, to align on timelines, resolve data issues, and provide migration status updates.
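The pre-load validation checks described in this role can be sketched minimally in Python. This is an illustrative example only: the column names (`INVOICE_NUM`, `SUPPLIER`, `AMOUNT`) and rules are hypothetical stand-ins, not fields from an actual Oracle FBDI template.

```python
import csv
import io

# Hypothetical required columns for an FBDI-style CSV extract.
REQUIRED = ["INVOICE_NUM", "SUPPLIER", "AMOUNT"]

def validate_rows(csv_text):
    """Return (line_no, error) tuples for rows failing basic pre-load checks."""
    errors = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for line_no, row in enumerate(reader, start=2):  # header is line 1
        for col in REQUIRED:
            if not (row.get(col) or "").strip():
                errors.append((line_no, f"missing {col}"))
        amount = (row.get("AMOUNT") or "").strip()
        if amount:
            try:
                float(amount)
            except ValueError:
                errors.append((line_no, f"non-numeric AMOUNT: {amount!r}"))
    return errors

sample = "INVOICE_NUM,SUPPLIER,AMOUNT\nINV-1,Acme,100.50\nINV-2,,abc\n"
print(validate_rows(sample))  # row 3 fails two checks
```

Running checks like these before an FBDI load, and again against the target after the load, is one way to catch errors on both sides of the migration.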
Posted 1 week ago
1.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Req ID: 321498 We are currently seeking a Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Job Duties: Work closely with the Lead Data Engineer to understand business requirements, and analyse and translate these requirements into technical specifications and solution design. Work closely with the data modeller to ensure data models support the solution design. Develop, test, and fix ETL code using Snowflake, Fivetran, SQL, and stored procedures. Analyse data and ETL code for defects and service tickets raised against the production solution. Develop documentation and artefacts to support projects. Minimum Skills Required: ADF; Fivetran (orchestration & integration); SQL; Snowflake DWH.
Posted 1 week ago
6.0 - 11.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Job Title: o9 Demand Planning Implementation Senior Consultant. Location: Bangalore. Department: Supply Chain Consulting. About the Role: We are seeking a highly motivated and skilled o9 Solutions Implementation Consultant to join our dynamic team. In this role, you will play a critical part in delivering successful end-to-end implementations of o9 Solutions, focusing specifically on the Demand Planning module. You will work closely with cross-functional teams including client stakeholders, business analysts, and technical teams to ensure seamless deployment of the o9 platform. Key Responsibilities: Lead the functional implementation of o9's Demand Planning solution for global and regional supply chain firms. Work closely with clients to gather business requirements and translate them into effective o9 platform configurations. Drive data integration by collaborating with clients' IT teams and o9 technical consultants (data ingestion, transformation, modeling). Develop and execute test scenarios to validate functional and system performance. Support user training and change management initiatives during deployment. Act as a trusted advisor, guiding clients through system adoption and continuous improvement post-implementation. Coordinate with o9 technical teams and client-side stakeholders to troubleshoot and resolve issues. Required Qualifications: 4–6 years of relevant experience in Supply Chain Planning or IT consulting, with at least 1–2 full-cycle implementations of the o9 platform, specifically in Demand Planning. Strong understanding of demand forecasting concepts, statistical modeling, and S&OP processes. Hands-on experience in configuring o9 Demand Planning modules, including DP foundational blocks, setting up master data and associated hierarchies, IBPL, creating active rules and procedures, and setting up UIs and user access roles. Knowledge of SQL, Python, or integration tools (e.g. Informatica, SSIS) is a strong advantage.
Strong analytical, problem-solving, and communication skills. Bachelor's or Master's degree in Engineering, Supply Chain, Operations, or related disciplines. Preferred Qualifications: Experience with other advanced planning systems (e.g., Kinaxis, Blue Yonder, SAP IBP) is a plus. Familiarity with Agile project management methodologies. o9 certification(s) on Technical Level 1 & 2, DP Ref Model.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
WPP is the creative transformation company. We use the power of creativity to build better futures for our people, planet, clients, and communities. Working at WPP means being part of a global network of more than 100,000 talented people dedicated to doing extraordinary work for our clients. We operate in over 100 countries, with corporate headquarters in New York, London and Singapore. WPP is a world leader in marketing services, with deep AI, data and technology capabilities, global presence and unrivalled creative talent. Our clients include many of the biggest companies and advertisers in the world, including approximately 300 of the Fortune Global 500. Our people are the key to our success. We're committed to fostering a culture of creativity, belonging and continuous learning, attracting and developing the brightest talent, and providing exciting career opportunities that help our people grow. Why we're hiring: This is a technology-driven finance role, so we are looking for candidates with foundational knowledge in data science, automation technologies, and programming (e.g., Python), along with an interest in leveraging AI and digital tools for finance transformation. What you'll be doing: Audit Support & Reconciliation: Support reconciliation projects, particularly focusing on statutory accounts data, audit fees, and LEID discrepancies. Assist with data collection, analysis, validation, and cleaning to ensure accuracy for audit purposes. Help track audit progress and monitor audit fee adjustments. Audit Automation Support: Assist in developing and testing automated systems for financial audits and reconciliations. Collaborate with the team to implement reconciliation tools that enhance audit accuracy and efficiency. Data Validation and Cleaning: Support data validation efforts to ensure consistency and accuracy across various systems. Help clean and preprocess data for use in automated systems and analytical models.
System Testing: Participate in testing and debugging automated systems to identify issues and improve functionality. Provide feedback on system performance and suggest enhancements. Data Quality Maintenance: Monitor and maintain data quality standards across audit automation processes. Ensure compliance with data governance policies. Process Documentation: Create detailed process maps and flowcharts to document the functioning of existing and new audit automation systems. Maintain up-to-date records of workflows and procedures. AI & Digital Innovation Contribution: Assist in the development and implementation of AI-driven solutions for financial processes. Support data automation projects using ETL tools and programming languages. Participate in the design and development of applications using Microsoft Power tools (PowerApps, Power Automate). Analyse financial data to identify opportunities for process improvement through digital enhancements. Training Material Preparation: Assist in preparing training materials to educate stakeholders on audit automation tools and processes. Collaboration and Learning: Stay updated on emerging trends in AI, automation, and digital finance to bring innovative ideas to the team. What you'll need: We’re looking for someone who is eager to learn, tech-savvy, and has a strong interest in digital innovation. To succeed in this role, you should have: Current enrolment in or recent graduation from a Finance, Accounting, Economics, Computer Science, Data Science, or Business Administration program. Basic knowledge of Python programming (required). Strong analytical and problem-solving skills with meticulous attention to detail. Proficiency in Microsoft Office applications, especially Excel. Excellent communication and organizational skills. Ability to work independently and collaboratively in a team environment. Genuine interest in AI, digital innovation, and data automation within the finance function. 
Familiarity with data validation, cleaning, or reconciliation tools is a plus. Experience with process mapping or flowchart creation is a bonus. In addition to the above, candidates should ideally possess at least one of the following competencies: Experience with ETL tools such as Informatica or Alteryx and knowledge of programming languages (e.g., SQL, Python beyond basic knowledge). Proficiency in Microsoft Power tools: PowerApps, Power Automate, Power BI. Demonstrable genuine interest in AI and digital innovation, and a foundational understanding of core Gen AI principles. What You'll Gain: This role offers a unique opportunity to grow your technical and professional skills while working in a supportive and innovative environment. Here's what you can expect: Hands-on experience in audit support, reconciliation, audit automation, and AI-driven solutions. Exposure to real-world data validation, system testing, and process improvement techniques in a global finance function. Opportunity to develop technical and professional skills in a supportive environment. Who you are: You're open: We are inclusive and collaborative; we encourage the free exchange of ideas; we respect and celebrate diverse views. We are open-minded: to new ideas, new partnerships, new ways of working. You're optimistic: We believe in the power of creativity, technology and talent to create brighter futures for our people, our clients and our communities. We approach all that we do with conviction: to try the new and to seek the unexpected. You're extraordinary: we are stronger together: through collaboration we achieve the amazing. We are creative leaders and pioneers of our industry; we provide extraordinary every day. What we'll give you: Passionate, inspired people – We aim to create a culture in which people can do extraordinary work. Scale and opportunity – We offer the opportunity to create, influence and complete projects at a scale that is unparalleled in the industry.
Challenging and stimulating work – Unique work and the opportunity to join a group of creative problem solvers. Are you up for the challenge? We believe the best work happens when we're together, fostering creativity, collaboration, and connection. That's why we've adopted a hybrid approach, with teams in the office around four days a week. If you require accommodations or flexibility, please discuss this with the hiring team during the interview process. WPP is an equal opportunity employer and considers applicants for all positions without discrimination or regard to particular characteristics. We are committed to fostering a culture of respect in which everyone feels they belong and has the same opportunities to progress in their careers. Please read our Privacy Notice (https://www.wpp.com/en/careers/wpp-privacy-policy-for-recruitment) for more information on how we process the information you provide.
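The reconciliation duties described in this role (matching statutory accounts data against audit fees and flagging LEID discrepancies) can be sketched minimally in Python. The field names and sample IDs below are illustrative assumptions, not WPP's actual schema.

```python
# A minimal reconciliation sketch: compare records from two systems by
# entity ID and report anything missing on one side or disagreeing on amount.
def reconcile(system_a, system_b):
    """Return a dict mapping entity IDs to a description of each discrepancy."""
    issues = {}
    for key in set(system_a) | set(system_b):
        a, b = system_a.get(key), system_b.get(key)
        if a is None:
            issues[key] = "missing in system A"
        elif b is None:
            issues[key] = "missing in system B"
        elif a != b:
            issues[key] = f"amount mismatch: {a} vs {b}"
    return issues

# Hypothetical extracts keyed by legal entity ID (LEID).
stat_accounts = {"LEID-001": 1200, "LEID-002": 800}
audit_fees = {"LEID-001": 1200, "LEID-003": 450}
print(reconcile(stat_accounts, audit_fees))
```

In practice the same pattern would run against cleaned extracts from each source system, with the discrepancy report feeding the audit-fee tracking described above.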
Posted 1 week ago
4.0 - 6.0 years
15 - 25 Lacs
Noida
Work from Office
We are looking for a highly experienced Senior Data Engineer with deep expertise in Snowflake to lead efforts in optimizing the performance of our data warehouse to enable faster, more reliable reporting. You will be responsible for improving query efficiency, data pipeline performance, and overall reporting speed by tuning Snowflake environments, optimizing data models, and collaborating with Application development teams. Roles and Responsibilities Analyze and optimize Snowflake data warehouse performance to support high-volume, complex reporting workloads. Identify bottlenecks in SQL queries, ETL/ELT pipelines, and data models impacting report generation times. Implement performance tuning strategies including clustering keys, materialized views, result caching, micro-partitioning, and query optimization. Collaborate with BI teams and business analysts to understand reporting requirements and translate them into performant data solutions. Design and maintain efficient data models (star schema, snowflake schema) tailored for fast analytical querying. Develop and enhance ETL/ELT processes ensuring minimal latency and high throughput using Snowflake’s native features. Monitor system performance and proactively recommend architectural improvements and capacity planning. Establish best practices for data ingestion, transformation, and storage aimed at improving report delivery times. Experience with Unistore will be an added advantage
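The clustering-key and micro-partitioning work described above rests on one idea: Snowflake stores min/max metadata per column for each micro-partition, so a filter can skip partitions whose range cannot match. The sketch below illustrates that pruning logic with made-up data; it is a conceptual model, not Snowflake's actual implementation.

```python
# Illustrative sketch of micro-partition pruning: each partition carries
# (min, max) metadata for the clustering key, and a range filter scans only
# partitions whose range overlaps the filter.
def prune_partitions(partitions, lo, hi):
    """Return indices of partitions whose [min, max] range overlaps [lo, hi]."""
    return [i for i, (pmin, pmax) in enumerate(partitions)
            if pmax >= lo and pmin <= hi]

# Min/max of a date-like sort key per micro-partition. Well-clustered data
# keeps these ranges narrow and non-overlapping, so more partitions are skipped.
parts = [(1, 10), (11, 20), (21, 30), (31, 40)]
print(prune_partitions(parts, 12, 18))  # only partition 1 needs scanning
```

This is why choosing a clustering key aligned with common report filters (for example, a date column) is one of the main levers for the reporting-speed goals in this role.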
Posted 1 week ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 9 to 12 Years. Must Have: experience in Python; team-handling experience; experience in ETL/data warehouse testing (DBT/Informatica, Snowflake, SQL). Requirements: 8-11 years of expertise in STLC, defect management, test strategy designing, planning, and approach. Should have experience with test requirement understanding, test data, test plan, and test case designing. Minimum 6+ years of work experience in UI, database, and ETL testing. Experience in ETL/data warehouse testing (DBT/Informatica, Snowflake, SQL). Any experience with AWS/cloud-hosted applications is an added advantage. Hands-on experience in writing database queries (preferably in PostgreSQL, Snowflake, MySQL, RDS). 3+ years of experience with automation script execution, maintenance, and enhancements with Selenium WebDriver (v3+)/Playwright, with programming experience in Python (must) with BDD – Gherkin and Behave, Pytest. Strong analytical, problem-solving, communication, collaboration, accountability, and stakeholder-management skills, passion to drive initiatives, risk highlighting, and team-leading capabilities. Proven team leadership experience with a minimum of 2 people reporting. Experience with Agile methodologies, such as Scrum and Kanban. MS Power BI reporting. Front-end vs back-end validation – good to have. Responsibilities: Perform a lead role in ETL testing, UI testing, database testing, and team management. Understand the holistic requirements, review and analyze stories, specifications, and technical design documents, and develop detailed test cases and test data to ensure business functionality is thoroughly tested – both automated and manual. Validate ETL workflows, ensuring data integrity, accuracy, and transformation rules using complex Snowflake SQL queries. Working knowledge of DBT is a plus. Create, execute, and maintain automation scripts in BDD – Gherkin/Behave, Pytest. Experience in writing database queries (preferably in PostgreSQL, Snowflake, MySQL, RDS).
Preparation, review, and update of test cases and relevant test data consistent with system requirements, including functional, integration, regression, and UAT testing. Coordinate with cross-team subject matter experts to develop, maintain, and validate test scenarios in the best interest of that POD. Take ownership of creating and maintaining artifacts on test strategy, BRD, defect count/leakage reports, and different quality issues. Collaborate with the DevOps/SRE team to integrate test automation into CI/CD pipelines (Jenkins, Rundeck, GitHub, etc.). Guide a team of at least 4 testers, lead by example, institutionalizing best practices in testing processes and automation in agile methodology. Meet with internal stakeholders to review current testing approaches, provide feedback on ways to improve/extend/automate along with data-backed inputs, and provide senior leadership with metrics consolidation. Maximize the opportunity to excel in an open and recognized work culture. Be a problem solver and a team player. Skills: Selenium WebDriver, Python, BDD, Behave, UI testing, PostgreSQL, Pytest, ETL testing, Snowflake, testing, data warehouse testing, Agile methodologies, MS Power BI, MySQL, automation scripting, Playwright, SQL, DBT, Agile, RDS, Gherkin, ETL, automation, Informatica, AWS
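The ETL validation duties above (checking data integrity and transformation rules) can be sketched in the Pytest style this posting asks for. The transformation rule itself (uppercase the region, drop cancelled orders) is a hypothetical example, not a rule from the actual system.

```python
# A minimal ETL test sketch: apply a hypothetical transformation rule to a
# source extract, then assert row-count integrity and rule correctness, the
# same shape a Pytest-based ETL suite would take.
def transform(rows):
    """Uppercase the region and drop cancelled orders (illustrative rule)."""
    return [{**r, "region": r["region"].upper()}
            for r in rows if r["status"] != "cancelled"]

def test_row_count_and_rule():
    source = [
        {"id": 1, "region": "emea", "status": "open"},
        {"id": 2, "region": "apac", "status": "cancelled"},
    ]
    target = transform(source)
    # Integrity: cancelled rows are filtered, nothing else is lost.
    assert len(target) == 1
    # Transformation rule: region is uppercased.
    assert target[0]["region"] == "EMEA"

test_row_count_and_rule()
```

In a real suite the `transform` step would be replaced by querying the Snowflake target after the ETL run, with the same assertions applied to the query results.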
Posted 1 week ago
3.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a Guidewire developer at PwC, you will specialise in developing and customising applications using the Guidewire platform. Guidewire is a software suite that provides insurance companies with tools for policy administration, claims management, and billing. You will be responsible for designing, coding, and testing software solutions that meet the specific needs of insurance organisations. Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. 
Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements. Total Experience: 3 to 9 years. Education Qualification: BTech/BE/MTech/MS/MCA. Preferred Skill Set/Roles and Responsibilities: Hands-on experience in Azure Databricks, ADF, and Guidewire. Works with the business in identifying detailed analytical and operational reporting/extract requirements. Experience in Python is a must-have. Able to create complex Microsoft SQL / ETL / SSIS queries. Participates in Sprint development, test, and integration activities. Creates detailed source-to-target mappings. Creates and validates data dictionaries. Writes and validates data translation and migration scripts. Communicates with the business to gather business requirements. Performs GAP analysis between existing (legacy) and new (GW) data-related solutions. Works with Informatica ETL developers.
Posted 1 week ago
5.0 - 8.0 years
15 - 25 Lacs
Chennai
Hybrid
Warm Greetings from SP Staffing!! Role: Informatica Developer. Experience Required: 5 to 8 yrs. Work Location: Chennai. Required Skills: Informatica PowerCenter. Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 1 week ago
50.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About The Opportunity Job Type: Permanent Application Deadline: 30 June 2025 Job Description Title Analyst Programmer Department AMO (ISS) Production Support Location Gurugram Level 2 About Fidelity International: Fidelity International offers investment solutions and services and retirement expertise to more than 2.56 million customers globally. As a privately-held, purpose-driven company with a 50-year heritage, we think generationally and invest for the long term. Operating in more than 25 locations and with $783.6 billion in total assets, our clients range from central banks, sovereign wealth funds, large corporates, financial institutions, insurers and wealth managers, to private individuals. Our Workplace & Personal Financial Health business provides individuals, advisers and employers with access to world-class investment choices, third-party solutions, administration services and pension guidance. Together with our Investment Solutions & Services business, we invest $567 billion on behalf of our clients. By combining our asset management expertise with our solutions for workplace and personal investing, we work together to build better financial futures. Our clients come from all walks of life and so do we. We are proud of our inclusive culture and encourage applications from the widest mix of talent, whatever your age, gender, ethnicity, sexual orientation, gender identity, social background and more. We are a disability-friendly company and would welcome a conversation with you if you feel you might benefit from any reasonable adjustments to perform to the best of your ability during the recruitment process and beyond. We are committed to being a truly flexible employer, encouraging and trusting our people to perform their role in the way that works best for them, our business, our colleagues and our clients. We offer the maximum possible flexibility over where and when you work for all, considering your role and any local regulations. 
We call this new approach “dynamic working”. Department Description: AMO (ISS) Production Support covers applications such as Global Fund Data Repository (GFDR), Product Hub, Performance Hub, Product (FRD), Reference Data Service, Transaction Service, Position Service, Frontier, Fund Distribution Service, etc., along with architecture and engineering services spanning Fidelity's business units in the UK and other parts of Europe and Asia, and is a strategic area targeted for growth over the coming years. Various key systems have been acting as the key enablers for the business in achieving its goals. The Enterprise portfolio of projects includes a large collection of strategic initiatives as well as tactical ones to support day-to-day operations and strengthen the environment. The support team aims at supporting and maintaining a global data warehouse that acts as the single source of data for various lines of business, helping them with MI reporting requirements and data analysis. This data is considered the golden source for distribution data and helps various business groups across the organization make knowledge-based decisions. Purpose of the Role: The position is for an Application Programmer in the AMO Production Support team. The role involves supporting key AMO Enterprise applications and data marts, requiring strong PL/SQL and stored procedure knowledge on the Oracle database platform. The candidate should have high expertise and core skills in Informatica and UNIX shell scripting. In addition, hands-on experience with Control-M technologies would be a plus. The successful candidate will be responsible for supporting the consumption of downstream feeds and applications in varied technologies. This also involves intensive interaction with the business and other systems groups, so good communication skills and the ability to work under pressure are an absolute must.
Key Responsibilities: The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment. Provide the first line of technical support for business-critical applications (principal technologies/applications used include Oracle, UNIX, PaaS, Python, Java and Control-M). Work in the support team alongside data analysts, business analysts, database administrators and business project teams in enhancing and supporting the production services. Help maintain Control-M schedules. Conduct analysis and fix bugs for production incidents. Carry out technical enhancements as desired. Carry out daily health-check activities involving application, system, and database checks on production systems/servers. The scope of responsibility also covers monitoring business-critical batch workloads, real-time/interactive processing, data transfer services, application on-boarding and upgrades, and recovery procedures. Report the root cause of incidents and present ideas on how to prevent incidents from occurring in the future. Ensure adherence to incident and change management processes. Engage regularly with business and systems teams looking to adopt and apply best practice in service support. Prepare and maintain documentation related to application support, such as SOM, Service Card, Support Rota, knowledge base, etc. Demonstrate continuous effort to improve operations, decrease turnaround times, streamline work processes, and work cooperatively and jointly to provide quality, seamless customer service. Responsible for 24x7 support as per support rosters. Flexibility to work in shifts (‘on-demand’ & ‘short-term’ basis) and/or on weekends.
Experience and Qualifications Required: Around 2–4 years of technical experience in the software/IT industry in development and support functions. Minimum 2–4 years of support experience in production support roles. Essential technical skills: At least 2-4 years of Oracle experience with a strong focus on SQL; PL/SQL knowledge is good to have. Basic understanding of PaaS technology, Python, Core Java and web services/REST APIs. Should have core UNIX shell scripting skills. Essential behavioural/operational skills: Ability to apply new skills/additional information acquired in relation to the role. Ability to interact with end users/business users. Ability to work closely with cross-functional teams including infrastructure teams/architects/business analysts. Ability to prioritise own activities and work under hard deadlines. Team player with commitment to achieving team goals. Motivated, flexible and with a ‘can do’ approach. Keen to learn and develop proficiency. Good communication skills, both verbal and written. Delivery- and results-focused. Good-to-have technical skills: Hands-on experience with scheduling tools – Control-M would be a definite plus. Experience in Informatica is good to have. Experience with any source control tool – SVN would be a plus. Good operating systems knowledge and associated commands (UNIX [Linux/AIX], MS Windows). Familiarity with Data Warehouse, Datamart and ODS concepts. Knowledge of essential Software Engineering principles. Knowledge of ITIL practices.
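The daily health-check activities above can be sketched as a small check runner. The check names and pass/fail logic below are hypothetical placeholders; a real runner would wrap actual database, application, and batch-status probes.

```python
# An illustrative daily health-check runner: execute a set of named checks
# and collect failures for the support report. Check contents are made up.
def run_health_checks(checks):
    """checks: dict name -> zero-arg callable returning True on success."""
    failures = []
    for name, check in checks.items():
        try:
            ok = check()
        except Exception:
            ok = False  # a crashing check counts as a failure
        if not ok:
            failures.append(name)
    return failures

checks = {
    "database reachable": lambda: True,
    "batch completed": lambda: False,  # pretend last night's load failed
}
print(run_health_checks(checks))  # ['batch completed']
```

Catching exceptions per check keeps one broken probe from masking the rest of the morning's results.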
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes. HiLabs Team Multidisciplinary industry leaders. Healthcare domain experts. AI/ML and data science experts. Professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop cutting-edge healthcare technology platforms, delivering innovative business solutions. Job Title: Data Engineer I/II Job Location: Bangalore, Karnataka, India Job summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment. Responsibilities Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources. Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data. Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability. Automate repetitive data engineering tasks and optimize data workflows for performance and scalability. Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations. Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability. Desired Profile Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role. Strong experience with ETL tools like Apache Airflow, Talend, or Informatica. Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development. Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery). Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink. Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift). Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA). Familiarity with version control systems like Git. HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs!
If this position appears to be a good fit for your skillset, we welcome your application. HiLabs Total Rewards Competitive Salary, Accelerated Incentive Policies, H1B sponsorship, Comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs & a collaborative working environment, Smart mentorship, and highly qualified multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes. CCPA disclosure notice - https://www.hilabs.com/privacy
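The ETL/ELT pipeline responsibilities in this role can be sketched as a minimal extract-transform-load flow. This is illustrative only: real pipelines here would run under tools like Airflow, Talend, or Informatica as the posting notes, and the record fields are made up.

```python
# A minimal ETL pipeline sketch: extract from an in-memory "source",
# transform (normalise names, drop records missing a patient ID), and load
# into a dict standing in for the warehouse table.
def extract(source):
    return list(source)

def transform(records):
    return [{"patient_id": r["patient_id"], "name": r["name"].strip().title()}
            for r in records if r.get("patient_id")]

def load(records, warehouse):
    for r in records:
        warehouse[r["patient_id"]] = r
    return warehouse

source = [{"patient_id": "P1", "name": "  ada lovelace "},
          {"patient_id": None, "name": "ghost"}]
warehouse = load(transform(extract(source)), {})
print(warehouse["P1"]["name"])  # Ada Lovelace
```

Keeping extract, transform, and load as separate functions is what makes each stage independently testable and monitorable, which is the core of the validation and monitoring duties listed above.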
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Summary As an Informatica Developer at Gainwell, you can contribute your skills as we harness the power of technology to help our clients improve the health and well-being of the members they serve — a community’s most vulnerable. Connect your passion with purpose, teaming with people who thrive on finding innovative solutions to some of healthcare’s biggest challenges. Here are the details on this position. Your role in our mission Essential Job Functions Establishes and enforces data warehousing standards at the client site to meet client requirements and business needs. Determines data warehousing strategy; selects tools and techniques, including middleware, data cleansing tools, and/or data management systems to provide the solution to data and client issues and other matters of significance and to meet business needs. Conducts research into new data warehouse applications and determines viability for adoption. Evaluates existing subject areas stored in the data warehouse and determines where data should be stored. Assists in several components of the overall architecture of the data warehouse system. Documents tasks for end-user, technical, and managerial review to ensure high-quality customer service. Basic Qualifications Bachelor's degree in computer science, mathematics, or a related field preferred. Five+ years of experience in Informatica PowerCenter 10.x and PowerExchange 10.x. Design, develop, and maintain robust Informatica PowerCenter mappings and workflows that process huge volumes of data across multiple platforms. Solid experience with performance tuning of Informatica processes in a partitioned database environment. Advanced SQL and UNIX shell scripting skills, CICS, and the ability to write complex stored procedures. Strong knowledge and hands-on experience in complex data transformations dealing with large volumes of data.
Experience working with COBOL/mainframe file structures using PowerExchange. Design and develop DB2 procedures and UNIX shell scripts for data import/export, data conversions, and performance tuning, and work on migration of database jobs between different environments. Preference to candidates with: healthcare domain experience (Medicaid or Medicare); ability to start at short notice. What you should expect in this role: Hybrid office environment. Fast-paced, challenging, and rewarding work environment. Work-life balance. Will require late-evening work to overlap US work hours.
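Handling COBOL/mainframe file structures, as mentioned above, usually means parsing fixed-width records defined by a copybook. A minimal sketch of that conversion step follows; the field layout, record, and implied-decimal convention are hypothetical examples, not any real copybook.

```python
# Sketch: convert a fixed-width mainframe-style record into a dict row.
# The layout below (name, start, end) is a made-up stand-in for a copybook.

LAYOUT = [("policy_id", 0, 8), ("holder", 8, 28), ("premium", 28, 36)]

def parse_record(line):
    row = {name: line[start:end].strip() for name, start, end in LAYOUT}
    # COBOL numerics often carry an implied decimal point; assume 2 places.
    row["premium"] = int(row["premium"]) / 100
    return row

record = "00012345JOHN SMITH          00012550"
parsed = parse_record(record)
```

Tools like PowerExchange handle this mapping declaratively; the sketch only shows the underlying idea of slicing by offsets.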
Posted 1 week ago
130.0 years
6 - 9 Lacs
Hyderābād
On-site
Job Description Manager Senior Data Engineer The Opportunity Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers. Role Overview Designs, builds, and maintains data pipeline architecture: ingest, process, and publish data for consumption.
Batch-processes collected data and formats it in an optimized, analysis-ready way. Ensures best-practice sharing across the organization. Enables delivery of data-analytics projects. Develops deep knowledge of the company's supported technology; understands the complexity of and dependencies between multiple teams and platforms (people, technologies). Communicates intensively with other platforms/competencies to comprehend new trends and methodologies being implemented or considered within the company ecosystem. Understands the customer and stakeholders' business needs and priorities and helps build solutions that support our business goals. Establishes and manages close relationships with customers/stakeholders. Keeps an overview of developments in the data engineering market to be able to explore new ways of delivering pipelines that increase their value/contribution. Builds a “community of practice” leveraging experience from delivering complex analytics projects. Is accountable for ensuring that the team delivers solutions with high quality standards, timeliness, compliance, and excellent user experience. Contributes to innovative experiments, specifically to idea generation, idea incubation and/or experimentation, identifying tangible and measurable criteria. Qualifications: Bachelor’s degree in Computer Science, Data Science, Information Technology, Engineering, or a related field. 3+ years of experience as a Data Engineer or in a similar role, with a strong portfolio of data projects. 3+ years of SQL experience, with the ability to write and optimize queries for large datasets. 1+ years of experience and proficiency in Python for data manipulation, automation, and pipeline development. Experience with Databricks, including creating notebooks and utilizing Spark for big data processing. Strong experience with data warehousing solutions (such as Snowflake), including schema design and performance optimization.
Experience with data governance and quality management tools, particularly Collibra DQ. Strong analytical and problem-solving skills, with an attention to detail. Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation. Who we are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What we look for: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025 Current Employees apply HERE Current Contingent Workers apply HERE Secondary Language(s) Job Description: In light of the current state of the AHIT commercial data and analytics function, it is imperative to address the limitations posed by relying on multiple HCL contractors who primarily possess ETL/Informatica skills but lack the essential data engineering expertise. This gap hinders the seamless flow of data through the AHIT digital backbone, which is critical for the successful implementation of the JEDI strategy. 
The future state envisioned by Merck IT emphasizes a strategic shift towards reducing contractor expenditures by developing in-house capabilities. By leveraging the Hyderabad tech center, we can cultivate a skilled team of data engineers who are not only proficient in ETL processes but also possess the advanced data engineering skills necessary to optimize our data infrastructure. This transition will enable us to enhance data accessibility, improve data quality, and facilitate a frictionless data flow, ultimately supporting the overarching goals of the JEDI strategy. Investing in the hiring of dedicated data engineers will yield significant long-term benefits, including reduced operational costs, increased agility in data management, and the ability to innovate and adapt to evolving business needs. By fostering a robust in-house data engineering team, we can ensure that our data capabilities align with Merck's strategic objectives, driving efficiency and effectiveness across the organization.

Search Firm Representatives Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular Relocation: VISA Sponsorship: Travel Requirements: Flexible Work Arrangements: Hybrid Shift: Valid Driving License: Hazardous Material(s): Required Skills: Business, Business Intelligence (BI), Business Management, Contractor Management, Cost Reduction, Database Administration, Database Optimization, Data Engineering, Data Flows, Data Infrastructure, Data Management, Data Modeling, Data Optimization, Data Quality, Data Visualization, Design Applications, Information Management, Management Process, Operating Cost Reduction, Productivity Improvements, Project Engineering, Social Collaboration, Software Development, Software Development Life Cycle (SDLC) {+ 1 more} Preferred Skills: Job Posting End Date: 08/20/2025 A job posting is effective until 11:59:59PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R350683
Posted 1 week ago
130.0 years
4 - 7 Lacs
Hyderābād
Remote
Job Description Manager, Data Visualization Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Our Technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of the company's IT operating model, Tech centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each tech center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers. Role Overview: A unique opportunity to be part of an Insight & Analytics Data hub for a leading biopharmaceutical company and define a culture that creates a compelling customer experience. Bring your entrepreneurial curiosity and learning spirit into a career of purpose, personal growth, and leadership. We are seeking those who have a passion for using data, analytics, and insights to drive decision-making that will allow us to tackle some of the world's greatest health threats. As a Manager in Data Visualization, you will be focused on designing and developing compelling data visualization solutions to enable actionable insights & facilitate intuitive information consumption for internal business stakeholders.
The ideal candidate will demonstrate competency in building user-centric visuals & dashboards that empower stakeholders with data-driven insights & decision-making capability. Our Quantitative Sciences team uses big data to analyze the safety and efficacy claims of our potential medical breakthroughs. We review the quality and reliability of clinical studies using deep scientific knowledge, statistical analysis, and high-quality data to support decision-making in clinical trials. What will you do in this role: Design & develop user-centric data visualization solutions utilizing complex data sources. Identify & define key business metrics and KPIs in partnership with business stakeholders. Define & develop scalable data models in alignment with, and with support from, data engineering & IT teams. Lead UI/UX workshops to develop user stories, wireframes & intuitive visualizations. Collaborate with data engineering, data science & IT teams to deliver business-friendly dashboard & reporting solutions. Apply best practices in data visualization design & continuously improve upon the intuitive user experience for business stakeholders. Provide thought leadership and data visualization best practices to the broader Data & Analytics organization. Identify opportunities to apply data visualization technologies to streamline & enhance manual/legacy reporting deliveries. Provide training & coaching to internal stakeholders to enable a self-service operating model. Co-create information governance & apply data privacy best practices to solutions. Continuously innovate on visualization best practices & technologies by reviewing external resources & the marketplace.
What should you have: 5 years' relevant experience in data visualization, infographics, and interactive visual storytelling. Working experience and knowledge in Power BI / QLIK / Spotfire / Tableau and other data visualization technologies. Working experience and knowledge in ETL processes, data modeling techniques & platforms (Alteryx, Informatica, Dataiku, etc.). Experience working with database technologies (Redshift, Oracle, Snowflake, etc.) & data processing languages (SQL, Python, R, etc.). Experience in leveraging and managing third-party vendors and contractors. Self-motivation, proactivity, and ability to work independently with minimum direction. Excellent interpersonal and communication skills. Excellent organizational skills, with the ability to navigate a complex matrix environment and organize/prioritize work efficiently and effectively. Demonstrated ability to collaborate and lead with diverse groups of work colleagues and positively manage ambiguity. Experience in the Pharma and/or Biotech industry is a plus. Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation. Who we are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What we look for: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity.
You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025 Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status: Regular Relocation: VISA Sponsorship: Travel Requirements: Flexible Work Arrangements: Remote Shift: Valid Driving License: Hazardous Material(s): Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs Preferred Skills: Job Posting End Date: 07/9/2025 A job posting is effective until 11:59:59PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R329043
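The KPI-definition work this role describes often starts with shaping raw records into an aggregate series a dashboard tool can consume. A minimal sketch, using invented field names and data, assuming dates arrive as ISO strings:

```python
# Sketch: aggregate raw transaction records into a monthly KPI series,
# the kind of pre-shaped table a Power BI / Tableau dashboard might read.
from collections import defaultdict

def monthly_totals(rows):
    totals = defaultdict(float)
    for r in rows:
        month = r["date"][:7]          # "YYYY-MM" from an ISO date string
        totals[month] += r["amount"]
    return dict(sorted(totals.items()))

sales = [
    {"date": "2025-01-15", "amount": 120.0},
    {"date": "2025-01-20", "amount": 80.0},
    {"date": "2025-02-02", "amount": 50.0},
]
kpi = monthly_totals(sales)
# → {"2025-01": 200.0, "2025-02": 50.0}
```

In practice this aggregation would live in the data model or an ETL layer rather than in the visualization tool itself.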
Posted 1 week ago
3.0 - 5.0 years
2 - 7 Lacs
Hyderābād
On-site
Country/Region: IN Requisition ID: 26281 Work Model: Position Type: Salary Range: Location: INDIA - HYDERABAD - BIRLASOFT OFFICE Title: Technical Lead-Data Engg Description: Area(s) of responsibility: We are seeking a skilled Informatica ETL Developer with 3–5 years of experience in ETL and Business Intelligence projects. The ideal candidate will have a strong background in Informatica PowerCenter, a solid understanding of data warehousing concepts, and hands-on experience in SQL, performance tuning, and production support. This role involves designing and maintaining robust ETL pipelines to support digital transformation initiatives for clients in the manufacturing, automotive, transportation, and engineering domains. Key Responsibilities: Design, develop, and maintain ETL workflows using Informatica PowerCenter. Troubleshoot and optimize ETL jobs for performance and reliability. Analyze complex data sets and write advanced SQL queries for data validation and transformation. Collaborate with data architects and business analysts to implement data warehousing solutions. Apply SDLC methodologies throughout the ETL development lifecycle. Support production environments by identifying and resolving data and performance issues. Work with Unix shell scripting for job automation and scheduling. Contribute to the design of technical architectures that support digital transformation.
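The "advanced SQL queries for data validation" mentioned above frequently take the form of a source-versus-target reconciliation: an anti-join that lists keys loaded into staging but missing from the warehouse. A small sketch of generating such a query; the table and column names are placeholders.

```python
# Sketch: build a post-load reconciliation query using the standard
# LEFT JOIN anti-join pattern. Table/column names are illustrative.

def reconciliation_sql(source, target, key):
    return (
        f"SELECT s.{key} FROM {source} s "
        f"LEFT JOIN {target} t ON s.{key} = t.{key} "
        f"WHERE t.{key} IS NULL"
    )

sql = reconciliation_sql("stg_orders", "dw_orders", "order_id")
```

Any rows this query returns represent records that failed to land in the target, which is typically the first check run after an ETL job completes.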
Posted 1 week ago
5.0 years
15 Lacs
Hyderābād
On-site
Experience: 5-10 years. JD: Mandatory skillset: Snowflake, DBT, and Data Architecture design experience in Data Warehouse. Good to have: Informatica or any ETL knowledge or hands-on experience. Good to have: Databricks understanding. 5-10 years of IT experience, with 2+ years of Data Architecture experience in Data Warehouse and 3+ years in Snowflake. Responsibilities: Design, implement, and manage cloud-based solutions on AWS and Snowflake. Work with stakeholders to gather requirements and design solutions that meet their needs. Develop and execute test plans for new solutions. Oversee and design the information architecture for the data warehouse, including all information structures such as staging area, data warehouse, data marts, and operational data stores. Ability to optimize Snowflake configurations and data pipelines to improve performance, scalability, and overall efficiency. Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modeling, Star & Snowflake schema design, Reference DW Architectures, ETL Architecture, ETL (Extract, Transform, Load), Data Analysis, Data Conversion, Transformation, Database Design, Data Warehouse Optimization, Data Mart Development, and Enterprise Data Warehouse Maintenance and Support. Significant experience working as a Data Architect with depth in data integration and data architecture for Enterprise Data Warehouse implementations (conceptual, logical, physical & dimensional models). Maintain Documentation: Develop and maintain detailed documentation for data solutions and processes.
Provide Training: Offer training and leadership to share expertise and best practices with the team Collaborate with the team and provide leadership to the data engineering team, ensuring that data solutions are developed according to best practices Job Type: Full-time Pay: From ₹1,500,000.00 per year Location Type: In-person Schedule: Monday to Friday Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Application Question(s): What is your notice period? How many years of experience do you have in Snowflake? How many years of experience do you have in Data Architecture experience in Data Warehouse? What is your current location? Are you ok with the work from office in Hyderabad location? What is your current CTC? What is your expected CTC? Experience: total work: 5 years (Required) Work Location: In person
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Primary skills: Technology -> Data Management - Data Integration -> Informatica. A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure effective Design, Development, Validation and Support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Knowledge of design principles and fundamentals of architecture. Understanding of performance engineering. Knowledge of quality processes and estimation techniques. Basic understanding of project domain. Ability to translate functional/nonfunctional requirements to systems requirements. Ability to design and code complex programs. Ability to write test cases and scenarios based on the specifications. Good understanding of SDLC and agile methodologies. Awareness of latest technologies and trends.
Posted 1 week ago
5.0 - 10.0 years
4 - 4 Lacs
Gurgaon
On-site
Full Time Gurugram Jun 10 2025 Notice Period: Immediate or Serving Notice Period Experience: 5-10 Years About Bizmetric: Bizmetric is a dynamic and innovative technology solutions company specializing in cutting-edge services in Data Analytics, Cloud Solutions, Artificial Intelligence, and Machine Learning. We help businesses optimize their operations through intelligent automation, data-driven insights, and scalable infrastructure solutions, delivering value-driven results across industries. Why Join Us? Learning & Certification Opportunities: Enhance your professional growth. Comprehensive Medical Coverage and Life Insurance: For your well-being. Flexible Work Environment: Enjoy a 5-day work week. Collaborative Culture: Be part of a fun, innovative workplace. Job Description: Snowflake, SQL, PL/SQL, Data Modeling. Join Us: Become part of our dynamic and innovative team and contribute your expertise to deliver cutting-edge web applications using the latest technologies. Apply now and be part of our success story! About Bizmetric: Bizmetric, a Microsoft Solution Partner and Oracle Gold Partner, was founded in 2011 in Houston, Texas, US and in 2015 in Pune, India. It is a fast-paced organization that is marking exponential growth every quarter. We have also surpassed geographical boundaries and established our presence in the US, UK, Middle East & Indian markets. Bizmetric is a pure-play, technologically driven company helping customers in the fields of Data Science, Advanced Analytics, Cloud and Edge Computing. We are Microsoft, Oracle, Snowflake, Confluent, and Informatica partners as well. Our rich & varied experience in Business Intelligence, coupled with market-disrupting solutions like Big Data & Data Science, is widening our services and solutions. Our incredibly expert professionals in Artificial Intelligence & Machine Learning have exhibited their intellect in high-profile projects.
Benefits: Unlimited opportunities to learn on our multiple training platforms. Certification reimbursement. Flexibility. Opportunity to work on multiple technologies. Medical coverage & life insurance. Company events and outings. Tech Thursdays and Fun Fridays. 5-day work week. Work-fun environment. Our Partners: Microsoft Solution Partner, AWS Partner, Oracle Gold Partner, SAP Partner, Snowflake Partner, Informatica Partner.
Posted 1 week ago
3.0 - 5.0 years
10 - 14 Lacs
Gurugram, Bengaluru, Mumbai (All Areas)
Hybrid
Role & responsibilities: Design, develop, and maintain ETL workflows using Ab Initio. Manage and support critical data pipelines and data sets across complex, high-volume environments. Perform data analysis and troubleshoot issues across Teradata and Oracle data sources. Collaborate with DevOps for CI/CD pipeline integration using Jenkins, and manage deployments in Unix/Linux environments. Participate in Agile ceremonies including stand-ups, sprint planning, and roadmap discussions. Support cloud migration efforts, including potential adoption of Azure, Databricks, and PySpark-based solutions. Contribute to project documentation, metadata management (LDM, PDM), onboarding guides, and SOPs. Preferred candidate profile: 3 years of experience in data engineering, with proven expertise in ETL development and maintenance. Proficiency with Ab Initio tools (GDE, EME, Control Center). Strong SQL skills, particularly with Oracle or Teradata. Solid experience with Unix/Linux systems and scripting. Familiarity with CI/CD pipelines using Jenkins or similar tools. Strong communication skills and ability to collaborate with cross-functional teams.
Posted 1 week ago
5.0 years
5 - 5 Lacs
Chennai
On-site
Job Information Date Opened 06/09/2025 Job Type Full time Industry Technology Work Experience 5+ years City Chennai State/Province Tamil Nadu Country India Zip/Postal Code 600096 Job Description What you’ll be working on: Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity. Collaborates with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Learn something new every day. What we are looking for: Bachelor's or Master's degree in a technical or business discipline, or related experience; Master's degree preferred. 4+ years of hands-on experience effectively managing data platforms, data tools, and/or depth in data management technologies. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Experience with orchestration tools like Airflow. Experience with any of the ETL tools like Talend, Informatica, etc. Experience in Data Warehouse solutions like Snowflake, Redshift. Exposure to data visualization tools (Tableau, Sisense, Looker, Metabase, etc.). Knowledge of GitHub and JIRA is a plus.
Familiarity with data warehousing & data governance. Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc.) is a plus. Requirements Knowledge/Skills/Abilities/Behaviours: A “build-test-measure-improve” mentality and the drive to motivate and lead teams to achieve impactful deliverables. Passion for operational efficiency, quantitative performance metrics, and process orientation. Working knowledge of project planning methodologies, IT standards, and guidelines. Customer passion, business focus, and the ability to negotiate, facilitate, and build consensus. The ability to promote a team environment across a large set of separate agile teams and stakeholders. Experience with or knowledge of Agile software development methodologies. Benefits Work at SquareShift: We offer a stimulating atmosphere where your efforts will have a significant impact on our company’s success. We are a fun, client-focused, results-driven company that centers on developing high-quality software, not work schedules and dress codes. We are driven by people who have a passion for technology and innovation, and we are committed to continuous improvement. Does this role excite you? Apply at the link below!
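The extract-transform-load structure this role centers on can be illustrated as a chain of small, testable functions. This is a toy sketch: the stages, field names, and in-memory "warehouse" are invented for the example; a real pipeline would pull from an API or S3 and load into a database.

```python
# Toy ETL chain: each stage is a pure function, which keeps the pipeline
# easy to test and reorder. All data here is invented for illustration.

def extract():
    # Stand-in for an API call or S3 read returning raw string records.
    return [{"user_id": "U1", "clicks": "3"}, {"user_id": "U2", "clicks": "7"}]

def transform(rows):
    # Cast string fields to the types the warehouse schema expects.
    return [{"user_id": r["user_id"], "clicks": int(r["clicks"])} for r in rows]

def load(rows, sink):
    # Stand-in for a bulk insert; returns the number of rows loaded.
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

Orchestrators like Airflow essentially schedule and monitor chains of this shape, with each stage as a task.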
Posted 1 week ago
4.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a skilled *Data Masking Engineer* with 4-5 years of experience in *SQL Server* and *Redgate tools* to design, implement, and manage data masking solutions. The ideal candidate will ensure sensitive data is protected while maintaining database usability for development, testing, and analytics. The candidate should be ready to relocate to Johannesburg, South Africa, at the earliest opportunity.
Responsibilities
- Design and implement *data masking strategies* for SQL Server databases to comply with security and privacy regulations (GDPR, HIPAA, etc.).
- Use *Redgate Data Masker* and other tools to anonymize sensitive data while preserving referential integrity.
- Develop and maintain masking rules, scripts, and automation workflows for efficient data obfuscation.
- Collaborate with *DBAs, developers, and security teams* to identify sensitive data fields and define masking policies.
- Validate masked data to ensure consistency, usability, and compliance with business requirements.
- Troubleshoot and optimize masking processes to minimize performance impact on production and non-production environments.
- Document masking procedures, policies, and best practices for internal teams.
- Stay updated with *Redgate tool updates, SQL Server features, and data security trends*.
Qualifications
- 4-5 years of hands-on experience in SQL Server database development/administration.
- Strong expertise in *Redgate Data Masker* or similar data masking tools (e.g., Delphix, Informatica).
- Proficiency in *T-SQL, PowerShell, or Python* for scripting and automation.
- Knowledge of *data privacy laws (GDPR, CCPA)* and secure data handling practices.
- Experience with *SQL Server security features* (Dynamic Data Masking, Always Encrypted, etc.) is a plus.
- Familiarity with *DevOps/CI-CD pipelines* for automated masking in development/test environments.
- Strong analytical skills to ensure masked data remains realistic for testing.
Preferred Qualifications
- Redgate or Microsoft SQL Server certifications.
- Experience with *SQL Server Integration Services (SSIS)* or ETL processes.
- Knowledge of *cloud databases (Azure SQL, AWS RDS)* and their masking solutions.
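The "preserving referential integrity" requirement in this role can be illustrated with a small sketch: deterministic masking replaces each distinct value with the same pseudonym everywhere it appears, so joins across masked tables still work. This is illustrative Python, not Redgate Data Masker output; the column names and salt are hypothetical.

```python
import hashlib

def mask_email(value: str, salt: str = "demo-salt") -> str:
    """Deterministically pseudonymize an email: the same input always yields
    the same output, so foreign-key joins across masked tables remain valid."""
    digest = hashlib.sha256((salt + value.lower()).encode()).hexdigest()[:12]
    return f"user_{digest}@masked.example"

# Two tables sharing the 'email' key (hypothetical sample data)
customers = [{"email": "alice@corp.com", "name": "Alice"}]
orders = [{"email": "alice@corp.com", "order_id": 1}]

masked_customers = [
    {**r, "email": mask_email(r["email"]), "name": "REDACTED"} for r in customers
]
masked_orders = [{**r, "email": mask_email(r["email"])} for r in orders]

# The join key still matches after masking
assert masked_customers[0]["email"] == masked_orders[0]["email"]
```

A production tool would add format-preserving options and salt management, but the core property tested above (determinism per distinct value) is what keeps masked databases usable for development and testing.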
Posted 1 week ago
8.0 - 11.0 years
6 - 9 Lacs
Noida
On-site
Snowflake - Senior Technical Lead
Full-time
Company Description
About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.
Job Description
Position: Snowflake - Senior Technical Lead
Experience: 8-11 years
Location: Noida/Bangalore
Education: B.E./B.Tech./MCA
Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security
Good to have Skills: Snowpark, Data Build Tool, Finance Domain
Preferred Skills
- Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing.
- Experience in data warehousing, with at least 2 years focused on Snowflake.
- Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration.
- Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks.
- Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning.
- Familiarity with data security, compliance requirements, and governance best practices.
- Experience in Python, Scala, or Java for Snowpark development.
- Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM).
Key Responsibilities
- Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost.
- Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe).
- Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion).
- Monitor query performance and resource utilization; tune warehouses, caching, and clustering.
- Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads.
- Define and enforce role-based access control (RBAC), masking policies, and object tagging.
- Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured.
- Establish best practices for dimensional modeling, data vault architecture, and data quality.
- Create and maintain data dictionaries, lineage documentation, and governance standards.
- Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets.
- Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies.
- Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives.
Qualifications
BTech/MCA
Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
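The micro-partition strategy responsibility in this role rests on one idea: Snowflake records min/max metadata per micro-partition for the clustering key, so well-clustered data lets the engine skip partitions whose range cannot match a filter. The sketch below simulates that pruning in plain Python; the partition metadata is invented and this is a conceptual illustration, not Snowflake internals.

```python
# Each micro-partition stores min/max metadata for the clustering key
# (here: a date column). Values are invented for illustration.
partitions = [
    {"id": 1, "min_date": "2024-01-01", "max_date": "2024-01-31"},
    {"id": 2, "min_date": "2024-02-01", "max_date": "2024-02-29"},
    {"id": 3, "min_date": "2024-03-01", "max_date": "2024-03-31"},
]

def prune(parts, lo, hi):
    """Keep only partitions whose [min, max] range overlaps the query's
    date filter; all others are skipped without being scanned."""
    return [p for p in parts if p["max_date"] >= lo and p["min_date"] <= hi]

scanned = prune(partitions, "2024-02-10", "2024-02-20")
# Only partition 2 overlaps the filter; partitions 1 and 3 are pruned.
```

Poorly clustered data (overlapping min/max ranges across partitions) defeats this pruning, which is why clustering-key choice drives both performance and scan cost.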
Posted 1 week ago
3.0 years
0 Lacs
Andhra Pradesh
On-site
We are looking for a PySpark solutions developer and data engineer who can design and build solutions for one of our Fortune 500 client programs, which aims to build data standardization and curation capabilities on a Hadoop cluster. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights, and integrate with the customer's critical systems.
Key Responsibilities
- Design, build and unit test applications on the Spark framework in Python.
- Build PySpark-based applications for both batch and streaming requirements, which requires in-depth knowledge of the majority of Hadoop and NoSQL databases as well.
- Develop and execute data pipeline testing processes and validate business rules and policies.
- Build integrated solutions leveraging Unix shell scripting, RDBMS, Hive, the HDFS file system, HDFS file types, and HDFS compression codecs.
- Create and maintain integration and regression testing frameworks on Jenkins integrated with Bitbucket and/or Git repositories.
- Participate in the agile development process, and document and communicate issues and bugs relative to data standards in scrum meetings.
- Work collaboratively with onsite and offshore teams.
- Develop and review technical documentation for artifacts delivered.
- Solve complex data-driven scenarios and triage defects and production issues.
- Learn-unlearn-relearn concepts with an open and analytical mindset.
- Participate in code releases and production deployments.
Preferred Qualifications
- BE/B.Tech/B.Sc. in Computer Science/Statistics from an accredited college or university.
- Minimum 3 years of extensive experience in the design, build and deployment of PySpark-based applications.
- Expertise in handling complex large-scale Big Data environments (preferably 20 TB+).
- Minimum 3 years of experience with Hive, YARN, and HDFS.
- Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities.
- Ability to build abstracted, modularized, reusable code components.
- Prior experience with ETL tools, preferably Informatica PowerCenter, is advantageous.
- Able to quickly adapt and learn.
- Able to jump into an ambiguous situation and take the lead on resolution.
- Able to communicate and coordinate across various teams.
- Comfortable tackling new challenges and new ways of working.
- Ready to move from traditional methods and adapt to agile ones.
- Comfortable challenging your peers and leadership team.
- Can prove yourself quickly and decisively.
- Excellent communication skills and good customer centricity.
- Strong target and high solution orientation.
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
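The "validate business rules and policies" responsibility in this role can be sketched as a small rule-driven check. Plain Python is used here rather than PySpark so the sketch is self-contained; the rules and records are hypothetical, though the same pattern maps directly onto DataFrame filters in a real pipeline.

```python
# Each rule is (name, predicate); a record fails if any predicate is False.
rules = [
    ("non_null_id", lambda r: r.get("id") is not None),
    ("positive_amount",
     lambda r: isinstance(r.get("amount"), (int, float)) and r["amount"] > 0),
]

def validate(records, rules):
    """Return (record_index, failed_rule_names) pairs for records that
    violate one or more business rules."""
    failures = []
    for i, rec in enumerate(records):
        failed = [name for name, pred in rules if not pred(rec)]
        if failed:
            failures.append((i, failed))
    return failures

records = [
    {"id": 1, "amount": 10.0},      # passes all rules
    {"id": None, "amount": -5},     # fails both rules
]
bad = validate(records, rules)
```

Keeping rules as named data rather than inline conditionals makes the failure report self-describing, which is what a pipeline test harness needs when it documents data-quality issues back to scrum meetings.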
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Data Engineer will work closely with clients and the eCS Biometrics team to optimize the elluminate® platform for end-to-end solutions to aggregate, transform, access and report on clinical data throughout the life cycle of a clinical trial. This includes study design in elluminate®, collaboration on specifications, and configuration of the various modules, including Data Central, Clinical Data Analytics and Trial Operational Analytics, Risk-Based Quality Management (RBQM), Statistical Computing Environment (SCE) and Operational Insights. The Data Engineer will be involved in standard ETL activities as well as programming custom listings, visualizations and analytics tools using Mapper and Qlik. The position involves a high level of quality control, adherence to standard operating procedures and work instructions, and a constant drive towards automation and process improvement.
Key Tasks & Responsibilities
- Design, develop, test, and deploy highly efficient code supporting SDTM, custom reports and visualizations using tools like MS SQL, elluminate® Mapper and Qlik
- Configure ETL processes to support the aggregation and standardization of clinical data from various sources including EDC systems, SAS and central laboratory vendors
- Work with Analytics developers, other team members and clients to review the business requirements and translate them into database objects and visualizations
- Manage multiple timelines and deliverables (for single or multiple clients) and manage client communications as assigned
- Provide diagnostic support and fix defects as needed
- Ensure compliance with eClinical Solutions/industry quality standards, regulations, guidelines, and procedures
- Other duties as assigned
CANDIDATE'S PROFILE
Education & Experience
- 3+ years of professional experience preferred
- Bachelor's degree or equivalent experience preferred
- Experience with database/warehouse architecture, design and development preferred
- Knowledge of various data platforms and warehouses including SQL Server, DB2, Teradata, AWS, Azure, Snowflake, etc.
- Understanding of cloud/hybrid data architecture concepts is a plus
- Knowledge of clinical trial data is a plus - CDISC ODM, SDTM, or ADaM standards
- Experience in the Pharmaceutical/Biotechnology/Life Science industry is a plus
Professional Skills
- Critical thinking, problem solving and strong initiative
- Communication and task management skills while working with technical and non-technical teams (both internal to eCS and clients)
- Must be team oriented with strong collaboration, prioritization, and adaptability skills
- Excellent knowledge of English; verbal and written communication skills with the ability to interact with users and clients providing solutions
- Excited to learn new tools and product modules and adapt to changing technology and requirements
- Experience in the Life Sciences industry or a CRO/clinical trial regulated environment preferred
Technical Skills
- Proficient in SQL, T-SQL, PL/SQL programming
- Experience in Microsoft Office applications, specifically MS Project and MS Excel
- Familiarity with multiple database platforms: Oracle, SQL Server, Teradata, DB2
- Familiarity with data reporting tools: QlikSense, QlikView, Spotfire, Tableau, JReview, Business Objects, Cognos, MicroStrategy, IBM DataStage, Informatica, Spark or related
- Familiarity with other languages and concepts: .NET, C#, Python, R, Java, HTML, SSRS, AWS, Azure, Spark, REST APIs, Big Data, ETL, Data Pipelines, Data Modelling, Data Analytics, BI, Data Warehouse, Data Lake or related
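The ETL step this role describes, standardizing raw EDC extracts into SDTM-style domains, can be sketched as a simple column mapping. This is illustrative Python with a simplified subset of the SDTM DM (Demographics) domain; the raw field names are hypothetical, and it is not elluminate® Mapper output.

```python
# Map raw EDC field names onto a (simplified) SDTM DM domain layout.
# The raw-side names are invented; the SDTM-side names follow the
# standard DM variable naming (USUBJID, SEX, BRTHDTC).
DM_MAPPING = {
    "subject": "USUBJID",
    "sex": "SEX",
    "birth_date": "BRTHDTC",
}

def to_sdtm_dm(raw_row, study_id="STUDY01"):
    """Rename raw EDC columns to SDTM names and add the required
    STUDYID and DOMAIN identifier variables."""
    row = {"STUDYID": study_id, "DOMAIN": "DM"}
    for src, tgt in DM_MAPPING.items():
        row[tgt] = raw_row.get(src)
    return row

dm = to_sdtm_dm({"subject": "001-002", "sex": "F", "birth_date": "1980-05-01"})
```

Real SDTM conversion adds controlled-terminology checks and many more variables, but the mapping-table pattern above is the core of the standardization work.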
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
- Design and develop end-to-end Master Data Management solutions using Informatica MDM Cloud Edition or on-prem hybrid setups.
- Implement match & merge rules, survivorship, hierarchy management, and data stewardship workflows.
- Configure landing, staging, base objects, mappings, cleanse functions, match rules, and trust/survivorship rules.
- Integrate MDM with cloud data lakes/warehouses (e.g., Snowflake, Redshift, Synapse) and business applications.
- Design batch and real-time integration using Informatica Cloud (IICS), APIs, or messaging platforms.
- Work closely with data architects and business analysts to define MDM data domains (e.g., Customer, Product, Vendor).
- Ensure data governance, quality, lineage, and compliance standards are followed.
- Provide production support and enhancements to existing MDM solutions.
- Create and maintain technical documentation, test cases, and deployment artifacts.
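The match-and-merge with trust-based survivorship described in this role can be sketched in plain Python: records judged to be the same entity are merged, and for each attribute the non-empty value from the most-trusted source survives. This is a conceptual illustration, not Informatica MDM configuration; the source systems and trust scores are invented.

```python
# Trust score per source system (higher wins survivorship) - invented values.
TRUST = {"CRM": 3, "ERP": 2, "WEB": 1}

def merge(records):
    """Merge duplicate records into one golden record: each attribute
    survives from the highest-trust source that supplies a non-empty value."""
    golden = {}
    best_trust = {}
    for rec in records:
        trust = TRUST[rec["source"]]
        for field, value in rec.items():
            if field == "source" or value in (None, ""):
                continue  # never let an empty value win survivorship
            if field not in golden or trust > best_trust[field]:
                golden[field] = value
                best_trust[field] = trust
    return golden

dupes = [
    {"source": "WEB", "name": "J. Smith", "email": "js@web.example"},
    {"source": "CRM", "name": "John Smith", "email": None},
]
golden = merge(dupes)
# name survives from CRM (higher trust); email survives from WEB
# because CRM supplied no value for it.
```

Production survivorship also weighs recency and validation status per field, but the per-attribute "best source wins" rule above is the core of trust-based survivorship.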
Posted 1 week ago
The Informatica job market in India is thriving, with numerous opportunities for skilled professionals in this field. Companies across various industries are actively hiring Informatica experts to manage and optimize their data integration and data quality processes.
The average salary range for Informatica professionals in India varies based on experience and expertise:
- Entry-level: INR 3-5 lakhs per annum
- Mid-level: INR 6-10 lakhs per annum
- Experienced: INR 12-20 lakhs per annum
A typical career progression in the Informatica field may include roles such as:
- Junior Developer
- Informatica Developer
- Senior Developer
- Informatica Tech Lead
- Informatica Architect
In addition to Informatica expertise, professionals in this field are often expected to have skills in:
- SQL
- Data warehousing
- ETL tools
- Data modeling
- Data analysis
As you prepare for Informatica job opportunities in India, make sure to enhance your skills, stay updated with the latest trends in data integration, and approach interviews with confidence. With the right knowledge and expertise, you can excel in the Informatica field and secure rewarding career opportunities. Good luck!