3.0 years
6 - 7 Lacs
Hyderābād
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Roles & Responsibilities
- Develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions for HR stakeholders.
- Partner with senior analysts to build visualizations that communicate insights and recommendations to stakeholders at various levels of the organization.
- Partner with HR senior analysts to implement statistical models, decision support models, and optimization techniques to solve complex business problems.
- Collaborate with cross-functional teams to gather and analyse data, define problem statements and identify KPIs for decision-making.
- Perform and document data analysis, data validation, and data mapping/design.
- Collaborate with HR stakeholders to understand business objectives and translate them into projects and actionable recommendations.
- Stay up to date with industry trends, emerging methodologies, and best practices in reporting, analytics, visualization and decision support.
The HR Data Analyst will play a critical role in ensuring the availability and integrity of HR data to drive informed decision-making.

Skills and competencies
- Strong analytical thinking and problem-solving skills, with working knowledge of statistical analysis, optimization techniques, and decision support models.
- Ability to present complex information to non-technical stakeholders in a clear and concise manner; skilled in creating relevant and engaging PowerPoint presentations.
- Proficiency in data analysis techniques, including the use of Tableau, ETL tools (Python, R, Domino), and statistical software packages.
- Advanced skills in Power BI, Power Query, DAX, and data visualization best practices.
- Experience with data modelling, ETL processes, and connecting to various data sources.
- Solid understanding of SQL and relational databases.
- Exceptional attention to detail, with the ability to proactively detect data anomalies and ensure data accuracy.
- Ability to work collaboratively in cross-functional teams and manage multiple projects simultaneously.
- Strong capability to work with large datasets, ensuring the accuracy and reliability of analyses.
- Strong business acumen, with the ability to translate analytical findings into actionable insights and recommendations.
- Working knowledge of data modelling to support analytics needs.
- Experience conducting thorough Exploratory Data Analysis (EDA) to summarize, visualize, and validate data quality and trends.
- Ability to apply foundational data science or basic machine learning techniques (such as regression, clustering, or forecasting) when appropriate.
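To give a flavor of the EDA and basic machine-learning skills listed above — a minimal Python sketch, assuming a hypothetical HR extract hr.csv with tenure_years and salary columns (all file and column names are illustrative, not from the posting):

```python
# Minimal EDA + clustering sketch on a hypothetical HR extract (hr.csv).
# Column names (tenure_years, salary) are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("hr.csv")

# Basic EDA: shape, missingness, and summary statistics.
print(df.shape)
print(df.isna().mean().sort_values(ascending=False).head())
print(df[["tenure_years", "salary"]].describe())

# Simple data-quality check: flag anomalous salaries via z-score.
z = (df["salary"] - df["salary"].mean()) / df["salary"].std()
print("potential outliers:", int((z.abs() > 3).sum()))

# Foundational ML: segment employees into clusters for reporting.
clean = df[["tenure_years", "salary"]].dropna()
df.loc[clean.index, "segment"] = KMeans(
    n_clusters=3, n_init=10, random_state=0
).fit_predict(StandardScaler().fit_transform(clean))
print(df["segment"].value_counts())
```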
Experience
- Bachelor's or master's degree in a relevant field such as Statistics, Mathematics, Economics, Operations Research or a related discipline.
- Minimum of 3 years of total relevant experience.
- Business experience with visualization tools (e.g., Power BI).
- Experience with data querying languages (e.g., SQL) and scripting languages (Python).
- Problem-solving skills with understanding and practical experience across most statistical modelling and machine learning techniques; academic knowledge alone is also acceptable.
- Ability to handle and maintain the confidentiality of highly sensitive information.
- Experience initiating and completing analytical projects with minimal guidance.
- Experience communicating results of analysis using compelling and persuasive oral and written storytelling techniques.
- Hands-on experience working with large datasets, statistical software packages (e.g., R, Python), and data visualization tools such as Tableau and Power BI.
- Experience with ETL processes, writing complex SQL queries, and data manipulation techniques.
- Experience in HR analytics is a nice-to-have.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.
BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
Posted 6 days ago
2.0 - 4.0 years
6 - 9 Lacs
Hyderābād
On-site
Summary
As a Data Analyst, you will be responsible for designing, developing, and maintaining efficient and scalable data pipelines for data ingestion, transformation, and storage.

About the Role
Location – Hyderabad, #LI-Hybrid

Key Responsibilities:
- Design, develop, and maintain efficient and scalable data pipelines for data ingestion, transformation, and storage.
- Collaborate with cross-functional teams, including data analysts, business analysts and BI, to understand data requirements and design appropriate solutions.
- Build and maintain data infrastructure in the cloud, ensuring high availability, scalability, and security.
- Write clean, efficient, and reusable code in scripting languages, such as Python or Scala, to automate data workflows and ETL processes.
- Implement real-time and batch data processing solutions using streaming technologies like Apache Kafka, Apache Flink, or Apache Spark (a streaming sketch follows this posting).
- Perform data quality checks and ensure data integrity across different data sources and systems.
- Optimize data pipelines for performance and efficiency, identifying and resolving bottlenecks and performance issues.
- Collaborate with DevOps teams to deploy, automate, and maintain data platforms and tools.
- Stay up to date with industry trends, best practices, and emerging technologies in data engineering, scripting, streaming data, and cloud technologies.

Essential Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field, with overall experience of 2-4 years.
- Proven experience as a Data Engineer or similar role, with a focus on scripting, streaming data pipelines, and cloud technologies like AWS, GCP or Azure.
- Strong programming and scripting skills in languages like Python, Scala, or SQL.
- Experience with cloud-based data technologies, such as AWS, Azure, or Google Cloud Platform.
- Hands-on experience with streaming technologies, such as StreamSets, Apache Kafka, Apache Flink, or Apache Spark Streaming.
- Strong experience with Snowflake (required).
- Proficiency in working with big data frameworks and tools, such as Hadoop, Hive, or HBase.
- Knowledge of SQL and experience with relational and NoSQL databases.
- Familiarity with data modelling and schema design principles.
- Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Excellent communication and teamwork skills.

Commitment to Diversity and Inclusion:
Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.

Accessibility and accommodation:
Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to diversityandincl.india@novartis.com and let us know the nature of your request and your contact information. Please include the job requisition number in your message.

Why Novartis:
Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives.
Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Division: US
Business Unit: Universal Hierarchy Node
Location: India
Site: Hyderabad (Office)
Company / Legal Entity: IN10 (FCRS = IN010) Novartis Healthcare Private Limited
Functional Area: Marketing
Job Type: Full time
Employment Type: Regular
Shift Work: No
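As a loose illustration of the Kafka/Spark streaming work this posting describes — a minimal PySpark Structured Streaming sketch. The topic name, broker address and paths are hypothetical assumptions, and the job needs the spark-sql-kafka connector on the classpath:

```python
# Minimal Kafka -> Spark Structured Streaming sketch (illustrative only).
# Topic name, bootstrap servers and output paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Read a stream of raw events from Kafka.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast value to string for parsing.
events = raw.select(col("value").cast("string").alias("payload"))

# Land the stream as Parquet; the checkpoint makes the job restartable.
query = (
    events.writeStream.format("parquet")
    .option("path", "/tmp/events")
    .option("checkpointLocation", "/tmp/events_chk")
    .start()
)
query.awaitTermination()
```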
Posted 6 days ago
3.0 years
6 - 7 Lacs
Hyderābād
On-site
Job Title: Data Engineer
Total Experience: 3+ Years
Location: Hyderabad
Job Type: Contract
Work Mode: On-site
Notice Period: Immediate to 15 Days
Work Timings: Monday to Friday, 10 am to 7 pm (IST)

Interview Process
Level 1: HR Screening (Personality Assessment)
Level 2: Technical Round
Level 3: Final Round
(Note: The interview levels may vary)

Company Overview
Compileinfy Technology Solutions Pvt. Ltd. is a fast-growing IT services and consulting company delivering tailored digital solutions across industries. At Compileinfy, we promote a culture of ownership, critical thinking, and technological excellence.

Job Summary
We are seeking a highly motivated Data Engineer to join our expanding Data & AI team. This role offers the opportunity to design and develop robust, scalable data pipelines and infrastructure, ensuring the delivery of high-quality, timely, and accessible data throughout the organization. As a Data Engineer, you will collaborate across teams to build and optimize data solutions that support analytics, reporting, and business operations. The ideal candidate combines deep technical expertise, strong communication, and a drive for continuous improvement.

Who You Are:
- Experienced in designing and building data pipelines for ingestion, transformation, and loading (ETL/ELT) of data from diverse sources to data warehouses or lakes.
- Proficient in SQL and at least one programming language, such as Python, Java, or Scala.
- Skilled at working with both relational databases (e.g., PostgreSQL, MySQL) and big data platforms (e.g., Hadoop, Spark, Hive, EMR).
- Competent in cloud environments (AWS, GCP, Azure), data lake, and data warehouse solutions.
- Comfortable optimizing and managing the quality, reliability, and timeliness of data flows.
- Able to translate business requirements into technical specifications and collaborate effectively with stakeholders, including data scientists, analysts, and engineers.
- Detail-oriented, with strong documentation skills and a commitment to data governance, security, and compliance.
- Proactive, agile, and adaptable to a fast-paced environment with evolving business needs.

What You Will Do:
- Design, build, and manage scalable ETL/ELT pipelines to ingest, transform, and deliver data efficiently from diverse sources to centralized repositories such as lakes or warehouses.
- Implement validation, monitoring, and cleansing procedures to ensure data consistency, integrity, and adherence to organizational standards (a validation sketch follows this posting).
- Develop and maintain efficient database architectures, optimize data storage, and streamline data integration flows for business intelligence and analytics.
- Work closely with data scientists, analysts, and business users to gather requirements and deliver tailored data solutions supporting business objectives.
- Document data models, dictionaries, pipeline architectures, and data flows to ensure transparency and knowledge sharing.
- Implement and enforce data security and privacy measures, ensuring compliance with regulatory requirements and best practices.
- Monitor, troubleshoot, and resolve issues in data pipelines and infrastructure to maintain high availability and performance.

Preferred Qualifications:
- Bachelor's or higher degree in Computer Science, Information Technology, Engineering, or a related field.
- 3-4 years of experience in data engineering, ETL development, or related areas.
- Strong SQL and data modeling expertise with hands-on experience in data warehousing or business intelligence projects.
- Familiarity with AWS data integration tools (e.g., Glue, Athena), messaging/streaming platforms (e.g., Kafka, AWS MSK), and big data tools (Spark, Databricks).
- Proficiency with version control, testing, and deployment tools for maintaining code and ensuring best practices.
- Experience in managing data security, quality, and operational support in a production environment.

What You Deliver
- Comprehensive data delivery documentation (data dictionary, mapping documents, models).
- Optimized, reliable data pipelines and infrastructure supporting the organization's analytics and reporting needs.
- Operations support and timely resolution of data-related issues aligned with service level agreements.

Interdependencies / Internal Engagement
Actively engage with cross-functional teams to align on requirements, resolve issues, and drive improvements in data delivery, architecture, and business impact. Become a trusted partner in fostering a data-centric culture and ensuring the long-term scalability and integrity of our data ecosystem.

Why Join Us?
At Compileinfy, we value innovation, collaboration, and professional growth. You'll have the opportunity to work on exciting, high-impact projects and be part of a team that embraces cutting-edge technologies. We provide continuous learning and career advancement opportunities in a dynamic, inclusive environment.

Perks and Benefits
- Competitive salary and benefits package
- Flexible work environment
- Opportunities for professional development and training
- A supportive and collaborative team culture

Application Process
Submit your resume with the subject line "Data Engineer Application – [Your Name]" to recruitmentdesk@compileinfy.com

Job Types: Full-time, Contractual / Temporary
Contract length: 12 months
Pay: ₹600,000.00 - ₹700,000.00 per year
Benefits: Health insurance, Provident Fund
Work Location: In person
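A minimal sketch of the validation step mentioned above, in Python with pandas; the file name, column names and checks are hypothetical assumptions, not from the posting:

```python
# Minimal data-quality gate for a pipeline stage (illustrative only).
# File and column names (orders.csv, order_id, amount) are assumptions.
import sys
import pandas as pd

df = pd.read_csv("orders.csv")

checks = {
    "no duplicate keys": df["order_id"].is_unique,
    "no null keys": df["order_id"].notna().all(),
    "amounts non-negative": (df["amount"] >= 0).all(),
    "row count sane": len(df) > 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Fail the run loudly so bad data never reaches the warehouse.
    print("data-quality checks failed:", ", ".join(failed))
    sys.exit(1)
print("all data-quality checks passed")
```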
Posted 6 days ago
1.0 - 4.0 years
6 - 9 Lacs
Hyderābād
On-site
Job description
Some careers have more impact than others. If you're looking for a career where you can make a real impression, join HSBC and discover how valued you'll be. HSBC is one of the largest banking and financial services organizations in the world, with operations in 62 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Decision Science Junior Analyst.

Principal responsibilities
- Support the business by providing vital input for strategic planning by senior management, enabling effective decision-making and addressing unforeseen challenges. The team leverages the best of data and analytics capabilities to enable smarter decisions and drive profitable growth. The team supports various domains ranging from Regulatory, Operations, Procurement, Human Resources, and Financial Crime Risk. It provides support to various business groups, and the job involves data analysis, model and strategy development and implementation, business intelligence, reporting and data management.
- The team addresses a range of business problems covering business growth, improving customer experience, limiting risk exposure, capital quantification, enhancing internal business processes, etc.
- Proactively identify key emerging compliance risks across all RC categories and interface appropriately with other RC teams and senior management.
- Provide greater understanding of the potential impact and associated consequences/failings of significant new or emerging risks, and provide innovative and effective solutions based on SME knowledge that assist the Business/Function.
- Propose, manage and track the resolution of subsequent risk management actions.
- Lead cross-functional projects using advanced data modelling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities.
- Against this period of considerable regulatory change and development, and as regulators develop their own understanding of compliance risk management, the role holder must maintain a strong knowledge and understanding of regulatory development and the evolution of the compliance risk framework, risk appetite and risk assessment methodology.
- Deliver repeatable and scalable analytics through the semi-automation of L1 Financial Crime Risk and Regulatory Compliance Risk Assurance controls testing. Here, Compliance Assurance will develop and run analytics on data sets which will contain personal information such as customer and employee data.

Requirements
- Bachelor's degree from a reputed university in statistics, economics or another quantitative field.
- Freshers with an educational background relevant to data science, or certified in data science courses, may also apply.
- 1-4 years of experience in the field of automation and analytics.
- Worked on a proof of concept or case study solving complex business problems using data.
- Strong analytical skills with business analysis experience or equivalent.
- Basic knowledge and understanding of financial services/banking operations is good to have.
- Delivery focused, demonstrating an ability to work under pressure and within tight deadlines.
- Basic knowledge of working in Python and other data science tools, and in visualization tools such as QlikSense or other visualization tools.
- Experience in SQL/ETL tools is an added advantage.
- Understanding of big data tools (Teradata, Hadoop, etc.) and adoption of cloud technologies like GCP/AWS/Azure is good to have.
- Experience in data science and other machine learning algorithms (e.g., regression, classification) is an added advantage (a classification sketch follows this posting).
- Basic knowledge of data engineering skills – building data pipelines using modern tools/libraries (Spark or similar).

You'll achieve more at HSBC.

HSBC is an equal opportunity employer committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, people with disabilities, color, national origin, veteran status, etc. We consider all applications based on merit and suitability to the role.

Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

***Issued By HSBC Electronic Data Processing (India) Private LTD***
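By way of illustration of the regression/classification techniques named above — a minimal scikit-learn sketch on synthetic data (everything here, including the features, is invented for demonstration; no HSBC data or systems are implied):

```python
# Minimal classification sketch (logistic regression) on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1_000
X = rng.normal(size=(n, 3))  # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```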
Posted 6 days ago
7.0 years
6 - 9 Lacs
Thiruvananthapuram
On-site
7 - 9 Years | 2 Openings | Trivandrum

Role description
Senior Data Engineer – Azure/Snowflake Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe (a Streams & Tasks sketch follows this posting).
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 7+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in: Python for scripting and ETL orchestration; SQL for complex data transformation and performance tuning in Snowflake; Azure Data Factory and Synapse Analytics (SQL Pools).
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: AWS, Azure Data Lake, Python

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation.
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
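As a rough sketch of the Streams & Tasks ingestion pattern this posting names — Snowflake SQL issued through the Snowflake Python connector. All object names (raw_events, curated_events, ev_stream, ev_task, ETL_WH) and credentials are hypothetical, and the landing/curated tables are assumed to already exist:

```python
# Illustrative Streams & Tasks setup in Snowflake via the Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# A stream records row-level changes on the landing table.
cur.execute("CREATE STREAM IF NOT EXISTS ev_stream ON TABLE raw_events")

# A task periodically drains the stream into a curated table,
# running only when the stream actually has new data.
cur.execute("""
    CREATE TASK IF NOT EXISTS ev_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('EV_STREAM')
    AS INSERT INTO curated_events
       SELECT * FROM ev_stream WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK ev_task RESUME")  # tasks are created suspended
conn.close()
```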
Posted 6 days ago
9.0 years
5 - 10 Lacs
Thiruvananthapuram
On-site
9 - 12 Years | 1 Opening | Trivandrum

Role description
Tech Lead – Azure/Snowflake & AWS Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 9+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in: Python for scripting and ETL orchestration; SQL for complex data transformation and performance tuning in Snowflake; Azure Data Factory and Synapse Analytics (SQL Pools).
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: Azure, AWS Redshift, Athena, Azure Data Lake

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation.
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
Posted 6 days ago
7.0 years
21 Lacs
Gurgaon
On-site
Job Title: Data Engineer
Location: Gurgaon (Onsite)
Experience: 7+ Years
Employment Type: Contract (6 months)

Job Description:
We are seeking a highly experienced Data Engineer with a strong background in building scalable data solutions using Azure/AWS Databricks, Scala/Python, and big data technologies. The ideal candidate should have a solid understanding of data pipeline design, optimization, and cloud-based deployments.

Key Responsibilities:
- Design and build data pipelines and architectures on Azure or AWS
- Optimize Spark queries and Databricks workloads (a tuning sketch follows this posting)
- Manage structured/unstructured data using best practices
- Implement scalable ETL processes with tools like Airflow, Kafka, and Flume
- Collaborate with cross-functional teams to understand and deliver data solutions

Required Skills:
- Azure/AWS Databricks
- Python / Scala / PySpark
- SQL, RDBMS
- Hive / HBase / Impala / Parquet
- Kafka, Flume, Sqoop, Airflow
- Strong troubleshooting and performance tuning in Spark

Qualifications:
- Bachelor's degree in IT, Computer Science, Software Engineering, or related field
- Minimum 7+ years of experience in Data Engineering/Analytics

Apply now if you're looking to join a dynamic team working with cutting-edge data technologies!

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: From ₹180,000.00 per month
Work Location: In person
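One common Spark-tuning pattern behind the performance-tuning requirement above is replacing a shuffle join with a broadcast join when one table is small — a minimal PySpark sketch with made-up table paths and columns:

```python
# Broadcast-join sketch: avoids shuffling the large fact table when the
# dimension table is small. Paths and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

facts = spark.read.parquet("/data/facts")  # large table (assumption)
dims = spark.read.parquet("/data/dims")    # small lookup table (assumption)

# Broadcasting ships the small table to every executor, so the big table
# is joined in place instead of being shuffled across the cluster.
joined = facts.join(broadcast(dims), on="product_id", how="left")

joined.explain()  # verify the plan shows BroadcastHashJoin, not SortMergeJoin
```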
Posted 6 days ago
3.0 - 5.0 years
5 - 7 Lacs
Gurgaon
On-site
Key Responsibilities
· Manage and maintain Microsoft SQL Server databases (2016 and later) across development, UAT, and production environments.
· Monitor and improve database performance using Query Store, Extended Events, and Dynamic Management Views (DMVs).
· Design and maintain indexes, partitioning strategies, and statistics to ensure optimal performance.
· Develop and maintain T-SQL scripts, views, stored procedures, and triggers.
· Implement robust backup and recovery solutions using native SQL Server tools and third-party backup tools (if applicable). A backup-automation sketch follows this posting.
· Ensure business continuity through high-availability configurations such as Always On Availability Groups, Log Shipping, or Failover Clustering.
· Perform database capacity planning and forecast growth requirements.
· Ensure SQL Server security by managing logins, roles, permissions, and encryption features like TDE.
· Collaborate with application developers for schema design, indexing strategies, and performance optimization.
· Handle deployments, patching, and version upgrades in a controlled and documented manner.
· Maintain clear documentation of database processes, configurations, and security policies.

Required Skills & Qualifications
· Bachelor's degree in Computer Science, Engineering, or related field.
· 3–5 years of solid experience with Microsoft SQL Server (2016 or later).
· Strong command of T-SQL including query optimization, joins, CTEs, window functions, and error handling.
· Proficient in interpreting execution plans, optimizing long-running queries, and using indexing effectively.
· Understanding of SQL Server internals such as page allocation, buffer pool, and lock escalation.
· Hands-on experience with backup/restore strategies and consistency checks (DBCC CHECKDB).
· Experience with SQL Server Agent jobs, alerts, and automation scripts (PowerShell or T-SQL).
· Ability to configure and manage SQL Server high-availability features.
· Exposure to tools like Redgate SQL Monitor, SolarWinds DPA, or similar is a plus.

Nice to Have
· Exposure to Azure SQL Database or cloud-hosted SQL Server infrastructure.
· Basic understanding of ETL workflows using SSIS.
· Microsoft Certification: MCSA / Azure Database Administrator Associate or equivalent.
· Experience with database deployments in CI/CD pipelines.

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹700,000.00 per year
Benefits: Provident Fund
Education: Bachelor's (Required)
Experience: Microsoft SQL Server: 3 years (Required)
Location: Gurugram, Haryana (Required)
Work Location: In person
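To illustrate the backup-and-consistency work above — a minimal automation sketch using Python's pyodbc (the server, database and backup path are assumptions; in practice this is often plain T-SQL in a SQL Server Agent job):

```python
# Nightly backup + DBCC CHECKDB sketch via pyodbc (illustrative only).
# Server, database and backup path are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # BACKUP/DBCC cannot run inside a user transaction
)
cur = conn.cursor()

# Full backup with checksum so page corruption is caught at backup time.
cur.execute(
    "BACKUP DATABASE SalesDB TO DISK = N'D:\\backups\\SalesDB.bak' "
    "WITH INIT, CHECKSUM, COMPRESSION"
)
while cur.nextset():  # drain informational result sets so the backup finishes
    pass

# Consistency check; surfaces an error if corruption is found.
cur.execute("DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS")
print("backup and consistency check completed")
conn.close()
```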
Posted 6 days ago
0 years
0 Lacs
South Dum-Dum, West Bengal, India
On-site
Roles & Responsibilities
Interested applicants are invited to apply directly at the NUS Career Portal. Your application will be processed only if you apply via the NUS Career Portal. We regret that only shortlisted candidates will be notified.

Job Description
To join the team at the Office of Data and Intelligence at the National University of Singapore (NUS) in the collection, analysis and visualization of data and the presentation of analytical results for regular and ad-hoc reporting purposes, meeting multiple information and strategic requirements for the various stakeholders of the university. The successful candidate will join a team of other data analysts and data scientists to engage in the following activities:
- Create detailed reports on data sources, methodology, analytical techniques, analytical results and insights for stakeholders.
- Develop visualisations and presentations for dissemination of analytical results and derivation of actionable insight.
- Assist other units in the university with enhancing their data culture and capabilities.
- Develop data repositories and systems to enable key stakeholders to access regular reports.

Requirements
The successful candidate should have the following qualifications and capabilities:
- Bachelor's degree in Data Analytics, Applied Statistics or Mathematics, Computer Science or another quantitative field; higher qualifications (Master's or PhD) are preferred.
- Excellent knowledge of at least one of the following scripting/programming languages: Python, R, Java, C++, C#.Net, SQL.
- Proficiency in using business intelligence tools (e.g. Power BI, Tableau or Qlik Sense).
- Experience in applying machine learning techniques and designing algorithms that are scalable and production-grade.
- Knowledge of database, ETL and data API concepts.
- Strong analytical, problem-solving skills and critical thinking, with the ability to articulate ideas with data.
- Excellent written and verbal communication skills for coordinating across teams.
Posted 6 days ago
3.0 years
0 Lacs
Sabzi Mandi
On-site
Payroll: Quantum Asia
Client: NIC
Location: Delhi

We are looking for Elasticsearch profiles (3+ years) for the ICJS & e-Prison Project, MHA Informatics - NIC Delhi.

Experience:
· Minimum of 3+ years of experience working with Elasticsearch in a production environment.
· Experience with distributed systems, big data, and search technologies is highly desirable.

Skills:
· Design, implement, and manage Elasticsearch clusters, ensuring optimal performance, scalability, and reliability.
· Configure and maintain Elasticsearch index mappings, settings, and lifecycle management (an indexing sketch follows this posting).
· Create and maintain comprehensive documentation for Elasticsearch setups, configurations, and best practices.
· Monitor cluster health, performance, and capacity planning to ensure high availability.
· Stay updated with the latest developments in Elasticsearch and related technologies, and share knowledge with the team.
· Manage the lifecycle of indexed data, including rollovers, snapshots, and retention policies.
· In-depth knowledge of Elasticsearch, including cluster management, indexing, search optimization, and security.
· Proficiency in data ingestion tools like Logstash, Beats, and other ETL pipelines.
· Develop and implement data ingestion pipelines using tools such as Logstash, Beats, or custom scripts to ingest structured and unstructured data.
· Strong understanding of JSON, REST APIs, and data modeling.
· Experience with Linux/Unix systems and scripting languages (e.g., Bash, Python).
· Familiarity with monitoring tools like Kibana, Grafana, or Prometheus.

Dear candidate,
Payroll: Quantum Asia
Client: NIC
PHP Programmers for TNRD Projects:
1) Asst. Programmers with at least 1-3 years' experience in PHP, JavaScript, HTML5, CSS3, Postgres SQL, Bootstrap (mandatory).
2) Programmers with at least 3-6 years' experience in PHP, JavaScript, HTML5, Postgres SQL, CSS3, Bootstrap (mandatory).
The qualifications must be per NICSI norms (M.Sc CS, M.Sc IT, MCA, B.E. CSE, B.Tech IT and BE ECE).

Kindly share these details:
Current company:
Total exp:
Rel exp:
Location:
Current CTC:
Expected CTC:
Notice:
Last working day:

Thanks & Regards
Divya
9360417524 (WhatsApp)

Job Type: Full-time
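As an illustration of the index-mapping and ingestion work above — a minimal sketch using the official elasticsearch Python client (8.x); the host, index name, and fields are hypothetical assumptions:

```python
# Create an index with explicit mappings, ingest a document, and search.
# Host, index name and fields are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Explicit mappings avoid surprises from dynamic type guessing.
es.indices.create(
    index="case-records",
    settings={"number_of_shards": 3, "number_of_replicas": 1},
    mappings={
        "properties": {
            "case_id": {"type": "keyword"},
            "filed_on": {"type": "date"},
            "summary": {"type": "text"},
        }
    },
)

es.index(index="case-records", id="c-1", document={
    "case_id": "c-1", "filed_on": "2024-01-15", "summary": "sample record",
})

es.indices.refresh(index="case-records")  # make the document searchable now
hits = es.search(index="case-records", query={"match": {"summary": "sample"}})
print(hits["hits"]["total"])
```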
Posted 6 days ago
3.0 years
0 Lacs
Guwahati, Assam, India
On-site
We are seeking a highly skilled Software Engineer with strong Python expertise and a solid understanding of data engineering principles to join our team. The ideal candidate will work on developing and optimizing scalable applications and data workflows, integrating diverse data sources, and supporting the development of data-driven products. This role requires hands-on experience in software development, data modeling, ETL/ELT pipelines, APIs, and cloud-based data systems. You will collaborate closely with product, data, and engineering teams to build high-quality, maintainable, and efficient solutions that support analytics, machine learning, and business intelligence initiatives.

Roles and Responsibilities

Software Development
- Design, develop, and maintain Python-based applications, APIs, and microservices with a strong focus on performance, scalability, and reliability.
- Write clean, modular, and testable code following best software engineering practices.
- Participate in code reviews, debugging, and optimization of existing applications.
- Integrate third-party APIs and services as required for application features or data ingestion.

Data Engineering
- Build and optimize data pipelines (ETL/ELT) for ingesting, transforming, and storing structured and unstructured data (a minimal pipeline sketch follows this posting).
- Work with relational and non-relational databases, ensuring efficient query performance and data integrity.
- Collaborate with the analytics and ML teams to ensure data availability, quality, and accessibility for downstream use cases.
- Implement data modeling, schema design, and version control for data pipelines.

Cloud & Infrastructure
- Deploy and manage solutions on cloud platforms (AWS/Azure/GCP) using services such as S3, Lambda, Glue, BigQuery, or Snowflake.
- Implement CI/CD pipelines and participate in DevOps practices for automated testing and deployment.
- Monitor and optimize application and data pipeline performance using observability tools.

Collaboration & Strategy
- Work cross-functionally with software engineers, data scientists, analysts, and product managers to understand requirements and translate them into technical solutions.
- Provide technical guidance and mentorship to junior developers and data engineers as needed.
- Document architecture, code, and processes to ensure maintainability and knowledge sharing.

Required Skills
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in Python software development.
- Strong knowledge of data structures, algorithms, and object-oriented programming.
- Hands-on experience in building data pipelines (Airflow, Luigi, Prefect, or custom ETL frameworks).
- Proficiency with SQL and database systems (PostgreSQL, MySQL, MongoDB, etc.).
- Experience with cloud services (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Familiarity with message queues/streaming platforms (Kafka, Kinesis, RabbitMQ) is a plus.
- Strong understanding of APIs, RESTful services, and microservice architectures.
- Knowledge of CI/CD pipelines, Git, and testing frameworks (PyTest, UnitTest).

APPLY THROUGH THIS LINK
Application link: https://forms.gle/WedXcaM6obARcLQS6
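A minimal Airflow DAG sketch of the extract-transform-load flow named above, assuming Airflow 2.4+; the task bodies and names are placeholders, not a real pipeline:

```python
# Minimal Airflow 2.4+ ETL DAG (illustrative; task logic is placeholder).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw records from a source system")

def transform():
    print("clean and reshape the extracted records")

def load():
    print("write curated records to the warehouse")

with DAG(
    dag_id="etl_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```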
Posted 6 days ago
0 years
1 - 1 Lacs
Durg
On-site
Job Description (JD) – Data Analyst

Responsibilities:
- Collect, clean, and analyze data from multiple sources.
- Develop and maintain dashboards and reports using tools like Power BI, Tableau, or Excel.
- Identify trends, patterns, and anomalies in datasets.
- Collaborate with cross-functional teams to define and track KPIs (a KPI sketch follows this posting).
- Translate business questions into data-driven answers.
- Automate data collection and analysis processes where possible.
- Provide actionable insights and recommendations to stakeholders.
- Document processes and maintain data integrity standards.

Required Skills & Qualifications:
- Bachelor's degree in Statistics, Mathematics, Computer Science, Economics, or a related field.
- Proficient in SQL, Excel, and at least one programming language (e.g., Python or R).
- Experience with data visualization tools (Tableau, Power BI, Looker).
- Strong statistical and analytical thinking skills.
- Ability to communicate complex data in a simple, actionable way.
- Familiarity with databases, data warehousing, and ETL processes.

Preferred Qualifications:
- Experience with cloud data platforms (AWS, GCP, Azure).
- Knowledge of machine learning or predictive analytics is a plus.
- Familiarity with version control systems like Git.

Job Type: Full-time
Pay: ₹13,000.00 - ₹15,000.00 per month
Benefits: Provident Fund
Work Location: In person
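To make the KPI-tracking item above concrete — a minimal pandas sketch computing a monthly KPI from a hypothetical sales.csv (the file and column names are assumptions):

```python
# Monthly KPI rollup sketch (illustrative; sales.csv is hypothetical).
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# KPI: monthly revenue and month-over-month growth.
monthly = (
    df.set_index("order_date")
      .resample("MS")["revenue"]  # "MS" buckets rows by month start
      .sum()
      .to_frame("revenue")
)
monthly["mom_growth_pct"] = monthly["revenue"].pct_change() * 100

print(monthly.tail(6).round(2))
```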
Posted 6 days ago
6.0 years
10 - 15 Lacs
India
On-site
Position: Lead Data Engineer
Location: Chennai
Work Hours: 12:00 PM – 9:00 PM IST (US Business Hours)
Availability: Immediate
Experience: 6+ Years

About the Company:
Ignitho Inc. is a leading AI and data engineering company with a global presence, including offices in the US, UK, India, and Costa Rica. Visit our website to learn more about our work and culture: www.ignitho.com. Ignitho is a portfolio company of Nuivio Ventures Inc., a venture builder dedicated to developing Enterprise AI product companies across various domains, including AI, Data Engineering, and IoT. Learn more about Nuivio at: www.nuivio.com.

Job Summary
We are looking for a highly skilled Lead Data Engineer to drive the delivery of data warehousing, ETL, and business intelligence solutions. This role involves leading cloud data migrations, building scalable data pipelines, and serving as the primary liaison for client engagements.

Roles & Responsibilities
- Lead end-to-end delivery of data warehousing and BI solutions.
- Act as SPOC for project engagements with cross-functional and international teams.
- Design and implement batch, near real-time, and real-time data pipelines.
- Build scalable data platforms using AWS (Lambda, Glue, Athena, S3) — an Athena sketch follows this posting.
- Develop and optimise SQL scripts, Python code, and shell scripts for automation.
- Translate business requirements into actionable data insights and reports.
- Manage Agile project execution, ensuring timely and high-quality deliverables.
- Perform root cause analysis, performance tuning, and data profiling.
- Lead legacy data migration to modern cloud platforms.
- Collaborate on data modelling and BI solution design.

Preferred Skills
- Strong Python programming for ETL, APIs, and AWS Lambda.
- Advanced SQL scripting and performance tuning.
- Proficiency in Unix/Linux shell scripting.
- Hands-on experience with AWS services (Lambda, Glue, Athena, S3, CloudWatch).
- Familiarity with GitLab, Bitbucket, or CircleCI.
- Experience in digital marketing data environments is a plus.
- Knowledge of Azure/GCP and tools like Informatica is preferred.
- Exposure to real-time streaming and data lake architectures.

Qualifications
- Bachelor's degree in computer science, MCA, or related field.
- 6+ years of experience in data engineering and BI.
- At least 5 years of project leadership experience.
- AWS/Data Warehousing certifications are a plus.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,500,000.00 per year
Schedule: Monday to Friday, US shift
Application Question(s): Current CTC; Expected CTC; Immediately available to join?
Work Location: In person
Application Deadline: 30/08/2025
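A rough sketch of querying an S3 data lake through Athena with boto3, as referenced above; the region, database, table, and results bucket are all hypothetical assumptions:

```python
# Run an Athena query over an S3 data lake via boto3 (illustrative).
# Database, table and bucket names are hypothetical assumptions.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "lake_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes; Athena is asynchronous.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:  # the first row is the column header
        print([c.get("VarCharValue") for c in row["Data"]])
```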
Posted 6 days ago
7.0 years
2 - 4 Lacs
Chennai
On-site
Req ID: 323206 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an AWS .NET Developer to join our team in Chennai, Tamil Nādu (IN-TN), India (IN). AWS .NET Developer: Minimum of 7 years in Software Engineering and Development. Strong development background in .NET on AWS. Must-have skills: .NET, SQL. Good-to-have skills: ETL (IICS/SSIS). Experience in Production Support. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 6 days ago
3.0 - 15.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – ODI Staff, Senior

The opportunity
We are looking for Staff and Senior level candidates with good working experience in data warehousing and data integration using the ETL tool Oracle Data Integrator (ODI), Oracle SQL and PL/SQL. Knowledge of other ETL tools and databases is an added advantage. As a problem-solver with the keen ability to diagnose a client's unique needs, you should be able to see the gap between where clients currently are and where they need to be, and be capable of creating a blueprint to help clients achieve their end goal.

Your key responsibilities:
- Overall 3-15 years of ETL lead/developer experience, with a minimum of 2-3 years' experience in Oracle Data Integrator (ODI).
- Minimum 2-3 end-to-end DWH implementation experiences.
- Experience in developing ETL processes – ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization, workflow design, etc. (an error-logging sketch follows this posting).
- Expertise in the Oracle ODI toolset and Oracle PL/SQL.
- Knowledge of the ODI Master and Work repositories.
- Knowledge of data modelling and ETL design.
- Setting up topology, building objects in Designer, monitoring Operator, different types of Knowledge Modules (KMs), agents, etc.
- Packaging components and database operations like aggregate, pivot, union, etc. using ODI mappings, error handling, automation using ODI, load plans, and migration of objects.
- Design and develop complex mappings, process flows and ETL scripts.
- Must be well versed and hands-on in using and customizing Knowledge Modules (KMs).
- Experience in performance tuning of mappings.
- Ability to design ETL unit test cases and debug ETL mappings.
- Expertise in developing load plans and scheduling jobs.
- Ability to design data quality and reconciliation frameworks using ODI.
- Integrate ODI with multiple source/target systems; experience in error recycling/management using ODI and PL/SQL.
- Expertise in database development (SQL/PL-SQL) for PL/SQL-based applications: creating PL/SQL packages, procedures, functions, triggers, views, materialized views and exception handling for retrieving, manipulating, checking and migrating complex datasets in Oracle.
- Experience in data migration using SQL*Loader and import/export.
- Experience in SQL tuning and optimization using explain plans and SQL trace files.
- Strong knowledge of ELT/ETL concepts, design and coding, plus partitioning and indexing strategy for optimal performance.
- Good verbal and written communication in English; strong interpersonal, analytical and problem-solving abilities.
- Experience interacting with customers to understand business requirement documents and translate them into ETL specifications and high- and low-level design documents.
- Experience in understanding complex source system data structures, preferably in the financial services industry (preferred).
- Ability to work with minimal guidance or supervision in a time-critical environment.

Education: BTech / MTech / MCA / MBA

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
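A minimal sketch of the error-logging/recycling pattern this posting mentions — a PL/SQL block executed through python-oracledb. The connection details and all table names (stg_orders, dw_orders, etl_errors) are hypothetical assumptions, not EY's or ODI's actual framework:

```python
# Error-logged ETL load sketch: a PL/SQL block run via python-oracledb.
# Table and column names are assumptions; this is not an ODI KM.
import oracledb

conn = oracledb.connect(user="etl_user", password="***", dsn="dbhost/orclpdb")
cur = conn.cursor()

# Move staged rows into the warehouse table; log bad rows to an error
# table instead of failing the whole load, so they can be recycled later.
cur.execute("""
BEGIN
  FOR r IN (SELECT order_id, amount FROM stg_orders) LOOP
    BEGIN
      INSERT INTO dw_orders (order_id, amount) VALUES (r.order_id, r.amount);
    EXCEPTION
      WHEN OTHERS THEN
        INSERT INTO etl_errors (order_id, err_msg, logged_at)
        VALUES (r.order_id, SQLERRM, SYSTIMESTAMP);
    END;
  END LOOP;
  COMMIT;
END;
""")
conn.close()
```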
Posted 6 days ago
5.0 years
4 - 8 Lacs
Chennai
On-site
- 5+ years of SQL experience
- Experience programming to extract, transform and clean large (multi-TB) data sets
- Experience with theory and practice of design of experiments and statistical analysis of results
- Experience with AWS technologies
- Experience in scripting for automation (e.g. Python) and advanced SQL skills
- Experience with theory and practice of information retrieval, data science, machine learning and data mining

Key Responsibilities:
- Own and develop advanced substitutability analysis frameworks combining text-based and visual matching capabilities (a text-matching sketch follows this posting).
- Drive technical improvements to product matching models to enhance accuracy beyond the current 79% in structured categories.
- Design category-specific matching criteria, particularly for complex categories like fashion, where accuracy is currently at 20%.
- Develop and implement advanced image matching techniques including pattern recognition, style segmentation, and texture analysis.
- Create performance measurement frameworks to evaluate product matching accuracy across different product categories.
- Partner with multiple data and analytics teams to integrate various data signals.
- Provide technical expertise in scaling substitutability analysis across 2000 different product types in multiple markets.

Technical Requirements:
- Deep expertise in developing hierarchical matching systems.
- Strong background in image processing and visual similarity algorithms.
- Experience with large-scale data analysis and model performance optimization.
- Ability to work with multiple data sources and complex matching criteria.

Key job responsibilities – Success Metrics:
- Drive improvement in substitutability accuracy to >70% across all categories.
- Reduce manual analysis time for product matching identification.
- Successfully implement enhanced visual matching capabilities.
- Create scalable solutions for multi-market implementation.

A day in the life
Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for business intelligence analytics. Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, Oracle, and OLAP technologies. Provide on-line reporting and analysis using OBIEE business intelligence tools and a logical abstraction layer against large, multi-dimensional datasets and multiple sources. Gather business and functional requirements and translate these requirements into robust, scalable, operable solutions that work well within the overall data architecture. Analyze source data systems and drive best practices in source teams. Participate in the full development life cycle, end-to-end, from design, implementation and testing, to documentation, delivery, support, and maintenance. Produce comprehensive, usable dataset documentation and metadata. Evaluate and make decisions around dataset implementations designed and proposed by peer data engineers. Evaluate and make decisions around the use of new or existing software products and tools. Mentor junior Business Research Analysts.
About the team
The RBS-Availability program includes Selection Addition (where new Head-Selections are added based on gaps identified by Selection Monitoring, SM), Buyability (ensuring new HS additions are buyable and recovering established ASINs that became non-buyable), SoROOS (rectifying defects for sourceable out-of-stock ASINs), Glance View Speed (offering ASINs with the best promise speed based on Store/Channel/FC-level nuances), Emerging MPs, and ASIN Productivity (having every ASIN's actual contribution profit meet or exceed the estimate). The North Star of the Availability program is to "Ensure all customer-relevant (HS) ASINs are available in Amazon Stores with guaranteed delivery promise at an optimal speed." To achieve this, we collaborate with SM, SCOT, Retail Selection, Category, and US-ACES to identify overall opportunities, defect drivers, and ingress across forecasting, sourcing, procurability, and availability systems, fixing them through UDE/Tech-based solutions.

- Experience working directly with business stakeholders to translate between data and business needs
- Experience managing, analyzing and communicating results to senior leadership

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
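To give a flavor of the text-based product matching this posting references — a minimal TF-IDF cosine-similarity sketch in scikit-learn. The catalog titles are invented, and this is not Amazon's actual matching method, just the standard baseline technique:

```python
# Text-similarity product matching sketch (TF-IDF + cosine similarity).
# Titles are invented; real substitutability systems are far richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = [
    "stainless steel water bottle 1l",
    "insulated steel water bottle 1 litre",
    "cotton crew neck t-shirt navy",
    "blue cotton round neck tee",
]

vec = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(titles)
sim = cosine_similarity(vec)

# For each product, report its most similar candidate substitute.
for i, title in enumerate(titles):
    j = sim[i].argsort()[-2]  # index -1 is the product itself
    print(f"{title!r} -> {titles[j]!r} (score={sim[i, j]:.2f})")
```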
Posted 6 days ago
0 years
0 Lacs
India
On-site
Language: English | Work Type: FTE
Job Description: Reporting into the Team Lead, some of the other responsibilities you’ll have:
Review requirements, specifications, and technical design documents to provide timely and meaningful feedback. Collaborate closely with cross-functional teams to identify data quality issues. Develop and implement data validation processes to maintain data integrity as required (a validation sketch follows this listing). Create well-structured test plans and test designs. Work closely with developers, product owners, and business analysts to drive the QA process, contributing ideas and process improvements. Design, develop, and execute automation scripts using tools. Participate in documenting team QA methodologies and technical findings for future reference. Work with the Team Lead on the definition and implementation of new QA tools.
Qualifications
What would make me a good candidate? Strong testing background in data warehouses, SSIS packages, and reporting. Extensive experience in data migration and data warehousing. Comprehensive technical testing experience in SQL is essential. Experience with various data and reporting tools such as SSRS, Power BI, Data Extractors, or similar would be advantageous. Strong experience in the design, implementation, and execution of test strategies and solutions (exposure to automation is a big plus). Ability to collaborate with a team and help them shift testing to the left. Experience with Continuous Integration / Continuous Delivery (such as TeamCity, Octopus, Azure DevOps). Experience with tools like JIRA and software development and collaboration tools such as Confluence. Experience working in an Agile environment.
Job Type: Full-time
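A common data validation task in roles like this is reconciling a migrated table against its source. Below is a minimal sketch; the two SQLite connections stand in for the real source and target databases, and the table and column names are hypothetical.

```python
# Sketch: reconcile row counts and a column checksum between source and target.
# Connection targets, table, and column names are hypothetical placeholders.
import sqlite3

def reconcile(src_conn, tgt_conn, table, amount_col):
    checks = {
        "row_count": f"SELECT COUNT(*) FROM {table}",
        "amount_sum": f"SELECT COALESCE(SUM({amount_col}), 0) FROM {table}",
    }
    failures = []
    for name, sql in checks.items():
        src_val = src_conn.execute(sql).fetchone()[0]
        tgt_val = tgt_conn.execute(sql).fetchone()[0]
        if src_val != tgt_val:
            failures.append(f"{name}: source={src_val} target={tgt_val}")
    return failures

src = sqlite3.connect("source.db")   # stand-in for the source system
tgt = sqlite3.connect("target.db")   # stand-in for the migrated target
for problem in reconcile(src, tgt, "sales_fact", "net_amount"):
    print("MISMATCH ->", problem)
```

Row counts catch dropped or duplicated rows; the column checksum catches silent value corruption that counts alone would miss.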
Posted 6 days ago
0 years
7 - 8 Lacs
India
On-site
ETL Tester/Developer
Experience: 5+ years | Location: Chennai/Bangalore
Job Description: Reporting into the Team Lead, some of the other responsibilities you’ll have:
Review requirements, specifications, and technical design documents to provide timely and meaningful feedback. Collaborate closely with cross-functional teams to identify data quality issues. Develop and implement data validation processes to maintain data integrity as required. Create well-structured test plans and test designs. Work closely with developers, product owners, and business analysts to drive the QA process, contributing ideas and process improvements. Design, develop, and execute automation scripts using tools (see the pytest-style sketch after this listing). Participate in documenting team QA methodologies and technical findings for future reference. Work with the Team Lead on the definition and implementation of new QA tools.
Qualifications
What would make me a good candidate? Strong testing background in data warehouses, SSIS packages, and reporting. Extensive experience in data migration and data warehousing. Comprehensive technical testing experience in SQL is essential. Experience with various data and reporting tools such as SSRS, Power BI, Data Extractors, or similar would be advantageous. Strong experience in the design, implementation, and execution of test strategies and solutions (exposure to automation is a big plus). Ability to collaborate with a team and help them shift testing to the left. Experience with Continuous Integration / Continuous Delivery (such as TeamCity, Octopus, Azure DevOps). Experience with tools like JIRA and software development and collaboration tools such as Confluence. Experience working in an Agile environment.
Job Type: Freelance | Contract length: 12 months | Pay: ₹65,000.00 - ₹70,000.00 per month
Ability to commute/relocate: Chennai G.P.O, Chennai, Tamil Nadu: reliably commute or plan to relocate before starting work (Preferred)
Application Question(s): This role is based on a freelance or B2B contract for one year with Sharp Brains. Do you agree to work under this type of contract?
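The automation-script requirement above is often met with a test framework rather than ad-hoc scripts. Below is a minimal pytest-style sketch of data-quality assertions against a warehouse table; the sqlite3 stand-in and the orders table/columns are assumptions for illustration, not a prescribed toolchain.

```python
# Sketch: pytest data-quality checks against a warehouse table.
# The sqlite3 database and the orders table/columns are hypothetical.
import sqlite3
import pytest

@pytest.fixture
def conn():
    c = sqlite3.connect("warehouse.db")  # stand-in for the real DWH connection
    yield c
    c.close()

def test_no_null_business_keys(conn):
    nulls = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE order_id IS NULL"
    ).fetchone()[0]
    assert nulls == 0, f"{nulls} rows are missing order_id"

def test_no_duplicate_business_keys(conn):
    dupes = conn.execute(
        "SELECT COUNT(*) FROM (SELECT order_id FROM orders "
        "GROUP BY order_id HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    assert dupes == 0, f"{dupes} duplicated order_id values"

def test_amounts_non_negative(conn):
    bad = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE order_amount < 0"
    ).fetchone()[0]
    assert bad == 0, f"{bad} rows with negative order_amount"
```

Expressing checks as tests is what lets them run in the CI/CD tools the listing names (TeamCity, Octopus, Azure DevOps), which is the essence of shifting testing left.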
Posted 6 days ago
2.0 years
20 - 24 Lacs
Chennai
On-site
Lead Assistant Manager EXL/LAM/1435996 | Payment Services, Chennai
Posted On: 30 Jul 2025 | End Date: 13 Sep 2025 | Required Experience: 2 - 4 Years
Basic Section: Number of Positions: 1 | Band: B2 | Band Name: Lead Assistant Manager | Cost Code: D900140 | Campus/Non-Campus: NON CAMPUS | Employment Type: Permanent | Requisition Type: Backfill | Max CTC: 2000000.0000 - 2400000.0000 | Complexity Level: Not Applicable | Work Type: Work From Office – Fully Working From EXL/Client Offices
Organisational: Group: Healthcare | Sub Group: Healthcare | Organization: Payment Services | LOB: Payment Services | SBU: Payment Operational Analytics | Country: India | City: Chennai | Center: IN Chennai C51
Skills: SQL, Python, Data Analysis
Minimum Qualification: B.Tech/B.E., M.Tech | Certification: No data available
Job Description: 2-4 years of experience using Microsoft SQL Server (version 2008 or later). Ability to create and maintain complex T-SQL queries, views, and stored procedures (a brief sketch follows this listing). 0-1+ years' experience performing advanced ETL development, including various dataflow transformation tasks. Ability to monitor performance and improve it by optimizing code and creating indexes. Proficient with Microsoft Access and Microsoft Excel. Knowledge of descriptive statistical modeling methodologies and techniques such as classification, regression, and association to support statistical analysis of various healthcare data. Strong knowledge of data warehousing concepts. Strong written, verbal, and customer service skills. Proficiency in compiling data, creating reports, and presenting information, including expertise with query tools, MS Excel, and/or products like SSRS, Tableau, Power BI, etc. Proficiency with various data forms, including but not limited to star and snowflake schemas. Ability to translate business needs into practical applications. Desire to work in a fast-paced environment. Ability to work in a team environment and be flexible in taking on various projects.
Workflow Type: L&S-DA-Consulting
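The T-SQL duties above center on stored procedures and index-driven tuning. Below is a minimal sketch of calling a stored procedure from Python via pyodbc; the connection string, procedure name, parameter, and table/column in the index are all hypothetical placeholders, not EXL's actual schema.

```python
# Sketch: execute a hypothetical T-SQL stored procedure via pyodbc.
# Connection string, procedure, table, and columns are illustrative only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dwh-server;DATABASE=claims;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Parameterized call avoids string concatenation and plan-cache pollution.
cursor.execute("EXEC dbo.usp_claims_summary @year = ?", 2024)
for row in cursor.fetchall():
    print(row)

# The kind of index a tuning pass might add for the procedure's filter column;
# hypothetical table/column, shown to illustrate the optimization duty.
cursor.execute(
    "CREATE INDEX IX_claims_service_year ON dbo.claims (service_year)"
)
conn.commit()
conn.close()
```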
Posted 6 days ago
4.0 years
0 Lacs
Coimbatore
On-site
Title of Position: Data Engineer
Department: Business Reporting Center
Job Summary: As a Data Engineer in our Business Reporting Centre, you will manage the data workflow from various ERP systems to our data warehouse. Your role includes overseeing ETL processes, maintaining data quality and integrity, and developing a secure data warehouse infrastructure.
Key Responsibilities: Design and implement ETL processes for data extraction from ERP systems (a minimal ETL sketch follows this listing). Develop and maintain data warehouses, focusing on database design and maintenance. Ensure data quality, integrity, and security across all platforms. Implement data access controls and security measures. Work with BI developers, providing data for Power BI reporting. Stay updated on the latest trends in data engineering.
Desired Skills: Strong experience in BI reporting, especially with Power BI. Proficiency in data modelling and data visualisation. Knowledge of Python, Java, big data technologies, and cloud platforms. Familiarity with SharePoint and the Microsoft Power Platform is advantageous.
Qualifications: Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field. Around 4 to 7 years of demonstrated experience in data engineering and database management. Proficiency in SQL and Python or Java, and experience with relational databases. Experience with data warehousing solutions and data security. Strong written and spoken English; German is a plus. Excellent problem-solving skills and attention to detail. Effective communication and teamwork capabilities.
Who we are: Mold-Masters is a global leader in the plastics industry. We design, manufacture, distribute, sell, and service highly engineered and customized plastic processing equipment and systems. Our hot runners, temperature controllers, auxiliary injection, and co-injection systems are utilized by customers of all sizes in every industry, from small local manufacturers to large worldwide OEM manufacturers of the most widely recognized brands. Over the course of our 50+ year history, we've built our reputation on delivering the best performance through our broad range of innovative technologies that optimize production to enhance molded part quality, increase productivity, and lower part cost. Unlock your operations' full potential with Mold-Masters.
Mold-Masters is an Operating Company of Hillenbrand. Hillenbrand (NYSE: HI) is a global industrial company that provides highly engineered, mission-critical processing equipment and solutions to customers in over 100 countries around the world. Our portfolio is composed of leading industrial brands that serve large, attractive end markets, including durable plastics, food, and recycling. Guided by our Purpose, Shape What Matters For Tomorrow™, we pursue excellence, collaboration, and innovation to consistently shape solutions that best serve our associates, customers, communities, and other stakeholders. To learn more, visit: www.Hillenbrand.com.
EEO: The policy of Hillenbrand Inc. is to extend opportunities to qualified applicants and employees on an equal basis regardless of an individual's age, race, color, sex, religion, national origin, disability, sexual orientation, gender identity/expression, or veteran status. Additionally, Hillenbrand Inc. and our operating companies are committed to being an Equal Employment Opportunity (EEO) Employer and offer opportunities to all job seekers, including individuals with disabilities.
If you need a reasonable accommodation to assist with your job search or application for employment, email us at recruitingaccommodations@hillenbrand.com. In your email, please include a description of the specific accommodation you are requesting as well as the job title and requisition number of the position for which you are applying. At Hillenbrand, everyone is welcome to apply and "Shape What Matters for Tomorrow".
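The core duty above, moving ERP data into a warehouse through an ETL step, can be sketched briefly. This is an illustrative outline only, not Mold-Masters' pipeline; the connection URLs, table names, and the pandas/SQLAlchemy choice are assumptions.

```python
# Sketch: extract from an ERP source, apply a light transform, load to a DWH.
# Connection URLs and table names are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

erp = create_engine("sqlite:///erp.db")          # stand-in for the ERP system
warehouse = create_engine("sqlite:///dwh.db")    # stand-in for the warehouse

# Extract: pull open sales orders from the ERP.
orders = pd.read_sql(
    "SELECT order_id, customer_id, amount, currency FROM sales_orders", erp
)

# Transform: normalize currency codes and drop rows failing basic quality rules.
orders["currency"] = orders["currency"].str.upper().str.strip()
orders = orders.dropna(subset=["order_id", "amount"])

# Load: replace the staging table; a production job would merge incrementally.
orders.to_sql("stg_sales_orders", warehouse, if_exists="replace", index=False)
print(f"Loaded {len(orders)} rows into stg_sales_orders")
```

Staging-then-merge is the usual pattern here: it keeps the warehouse queryable while a load is in flight and gives the quality checks a place to run before data reaches reporting tables.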
Posted 6 days ago
3.0 - 5.0 years
3 - 8 Lacs
Chennai
On-site
3 - 5 Years | 5 Openings | Bangalore, Chennai, Kochi, Trivandrum
Role description
Role Proficiency: Independently develops error-free code with high-quality validation of applications, guides other developers, and assists Lead 1 – Software Engineering.
Outcomes: Understand and provide input to application/feature/component designs, developing them in accordance with user stories/requirements. Code, debug, test, document, and communicate products/components/features at all development stages. Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components. Optimize efficiency, cost, and quality by identifying opportunities for automation/process improvements and agile delivery models. Mentor Developer 1 – Software Engineering and Developer 2 – Software Engineering to perform effectively in their roles. Identify problem patterns and improve the technical design of the application/system. Proactively identify issues/defects/flaws in module/requirement implementation. Assist Lead 1 – Software Engineering on technical design. Review activities and begin demonstrating Lead 1 capabilities in making technical decisions.
Measures of Outcomes: Adherence to engineering process and standards (coding standards). Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post-delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Meeting the defined productivity standards for the project. Number of reusable components created. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements.
Outputs Expected: Code: develop code independently for the above. Configure: implement and monitor the configuration process. Test: create and review unit test cases, scenarios, and execution. Domain relevance: develop features and components with a good understanding of the business problem being addressed for the client. Manage Project: manage module-level activities. Manage Defects: perform defect RCA and mitigation. Estimate: estimate time, effort, and resource dependence for one's own work and others' work, including modules. Document: create documentation for own work and peer-review others' documentation. Manage knowledge: consume and contribute to project-related documents, SharePoint libraries, and client universities. Status Reporting: report the status of assigned tasks and comply with project-related reporting standards/processes. Release: execute the release process. Design: LLD for multiple components. Mentoring: mentor juniors on the team; set FAST goals and provide feedback on mentees' FAST goals.
Skill Examples: Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Develop user interfaces, business software components, and embedded software components. Manage and guarantee high levels of cohesion and quality. Use data models. Estimate the effort and resources required for developing/debugging features/components. Perform and evaluate tests in the customer or target environment. Team player with good written and verbal communication abilities. Proactively ask for help and offer help.
Knowledge Examples: Appropriate software programs/modules. Technical design. Programming languages. DBMS. Operating systems and software platforms. Integrated development environments (IDE). Agile methods. Knowledge of the customer domain and sub-domain where the problem is solved.
Additional Comments: Design, develop, and optimize large-scale data pipelines using Azure Databricks (Apache Spark). Build and maintain ETL/ELT workflows and batch/streaming data pipelines (a PySpark sketch follows this listing). Collaborate with data analysts, scientists, and business teams to support their data needs. Write efficient PySpark or Scala code for data transformations and performance tuning. Implement CI/CD pipelines for data workflows using Azure DevOps or similar tools. Monitor and troubleshoot data pipelines and jobs in production. Ensure data quality, governance, and security per organizational standards.
Skills: Databricks, ADB, ETL
About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people, and led by purpose, UST partners with clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
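A compact illustration of the PySpark transformation work described above. The input path, column names, and Delta output are assumptions for the sketch; on Databricks the SparkSession is already provided, so the builder line would be unnecessary there.

```python
# Sketch: batch PySpark transform of raw orders into a cleaned, aggregated table.
# Paths, column names, and the Delta format are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # hypothetical landing zone

cleaned = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

daily = (
    cleaned.groupBy("order_date", "market")
           .agg(F.sum("amount").alias("revenue"),
                F.countDistinct("customer_id").alias("customers"))
)

# On Databricks this would typically land in a Delta table.
daily.write.mode("overwrite").format("delta").save("/mnt/curated/daily_orders/")
```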
Posted 6 days ago
3.0 years
11 - 24 Lacs
Chennai
On-site
Job Description
Data Engineer, Chennai
We’re seeking a highly motivated Data Engineer to join our agile, cross-functional team and drive end-to-end data pipeline development in a cloud-native, big data ecosystem. You’ll leverage ETL/ELT best practices and data lakehouse paradigms to deliver scalable solutions. Proficiency in SQL, Python, Spark, and modern data orchestration tools (e.g., Airflow) is essential (a minimal Airflow sketch follows this listing), along with experience in CI/CD, DevOps, and containerized environments like Docker and Kubernetes. This is your opportunity to make an impact in a fast-paced, data-driven culture.
Responsibilities: Develop and maintain data pipelines. Contribute to development, maintenance, testing strategy, design discussions, and team operations. Participate in all aspects of agile software development, including design, implementation, and deployment. Own the end-to-end lifecycle of new product features/components. Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design. Work with a small, cross-functional team on products and features to drive growth. Learn new tools, languages, workflows, and philosophies. Research and suggest new technologies for improving the product. Influence product development by making important technical decisions and shaping the system architecture and development practices.
Qualifications: Excellent team player with strong communication skills. B.Sc. in Computer Science or similar. 3-5 years of experience in data pipeline development. 3-5 years of experience in PySpark/Databricks. 3-5 years of experience in Python/Airflow. Knowledge of OOP and design patterns. Knowledge of server-side technologies such as Java and Spring. Experience with Docker containers, Kubernetes, and cloud environments. Expertise in testing methodologies (unit testing, TDD, mocking). Fluent with large-scale SQL databases. Good problem-solving and analysis abilities.
Advantageous: Experience with Azure cloud services. Experience with Agile development methodologies. Experience with Git.
Additional Information
Our Benefits: Flexible working environment. Volunteer time off. LinkedIn Learning. Employee Assistance Program (EAP).
About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook
Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
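Orchestration with Airflow, named above as essential, means expressing a pipeline as a DAG of dependent tasks. Below is a minimal sketch using the classic PythonOperator API; the dag_id, schedule, and stubbed task bodies are placeholders, and the `schedule` argument assumes a recent Airflow 2.x release.

```python
# Sketch: a minimal daily Airflow DAG with an extract -> transform dependency.
# Task logic is stubbed; dag_id and schedule are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data...")   # placeholder for the real extract step

def transform():
    print("transforming...")          # placeholder for the real transform step

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform
```

The `>>` operator declares the dependency, so the scheduler runs extract before transform each day and retries or alerts per task rather than per script.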
Posted 6 days ago
4.0 - 6.0 years
4 - 8 Lacs
Chennai
On-site
Senior Tableau Developer
Job Summary: We are seeking a highly skilled and experienced Senior Tableau Developer to join our growing data visualization team. The ideal candidate will have a deep understanding of Tableau Desktop and Server, a strong analytical mindset, and the ability to create compelling and insightful dashboards and visualizations, including the effective presentation of statistical data. You will play a key role in transforming complex data into actionable insights for business stakeholders.
Responsibilities: Design, develop, and maintain interactive Tableau dashboards and visualizations, incorporating appropriate statistical displays. Connect to various data sources, including databases, spreadsheets, and cloud platforms. Perform data modeling and transformation within Tableau to prepare data for statistical analysis. Create complex calculations, parameters, and calculated fields to enable dynamic statistical analysis. Develop and maintain Tableau Server workbooks and data sources, ensuring efficient data handling for statistical computations. Collaborate with business stakeholders to gather requirements and translate them into effective visualizations, including statistical charts and summaries. Select and implement appropriate statistical methods and visualizations (e.g., distributions, correlations, regressions, trend lines, box plots) within Tableau (the statistic behind trend lines is sketched after this listing). Provide training and support to end users on Tableau functionality, including statistical analysis features. Stay up to date with the latest Tableau features and best practices, particularly regarding statistical data handling and visualization. Contribute to the development of data visualization standards and guidelines, including best practices for statistical displays. Perform performance tuning and optimization of Tableau dashboards, especially those involving complex statistical calculations. Troubleshoot and resolve Tableau-related issues, including those related to statistical data and calculations. Mentor junior Tableau developers in best practices for statistical data visualization.
Qualifications: Bachelor's degree in Computer Science, Information Systems, Statistics, or a related field. 4-6 years of experience developing Tableau dashboards and visualizations, with a focus on statistical data representation. Advanced proficiency in Tableau Desktop and Server, including statistical functions and chart types. Strong understanding of statistical concepts and methods, including descriptive statistics, hypothesis testing, and regression analysis. Strong understanding of data warehousing concepts and relational databases. Experience with data modeling and ETL processes. Excellent analytical and problem-solving skills. Ability to communicate complex technical and statistical concepts to non-technical audiences. Strong collaboration and communication skills. Experience with other data visualization tools (e.g., Power BI, Qlik Sense) is a plus. Tableau certification is preferred.
Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
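Tableau's trend lines are configured in the UI rather than coded, but the statistic underneath is ordinary least-squares regression. Below is a small Python illustration of that concept only; the monthly sales figures are fabricated purely for demonstration and do not come from the listing.

```python
# Sketch: the least-squares trend line Tableau fits behind "Add Trend Line".
# The monthly sales figures are made up purely for illustration.
import numpy as np

months = np.arange(1, 13)                        # Jan..Dec as 1..12
sales = np.array([102, 98, 110, 115, 120, 118,
                  125, 131, 128, 136, 142, 150])

slope, intercept = np.polyfit(months, sales, deg=1)
predicted = slope * months + intercept

# R^2 mirrors the "Describe Trend Model" output in Tableau.
ss_res = np.sum((sales - predicted) ** 2)
ss_tot = np.sum((sales - sales.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"trend: sales ~ {slope:.2f} * month + {intercept:.2f}, R^2 = {r_squared:.3f}")
```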
Posted 6 days ago
15.0 years
0 Lacs
Coimbatore
On-site
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-Have Skills: Data Engineering
Good-to-Have Skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education
Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging to ensure applications function as intended, contributing to the overall success of your projects.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Assist in documenting application specifications and user guides. Engage in continuous learning to stay current with the latest technologies and methodologies.
Professional & Technical Skills: Must-Have: proficiency in Data Engineering. Good-to-Have: experience with data warehousing solutions. Strong understanding of ETL processes and data integration techniques. Familiarity with cloud platforms and services related to data engineering. Experience working with databases and data modeling.
Additional Information: The candidate should have a minimum of 3 years of experience in Data Engineering. This position is based at our Coimbatore office. 15 years of full-time education is required.
Posted 6 days ago
0 years
5 - 6 Lacs
Noida
On-site
Ready to build the future with AI? At Genpact, we don’t just keep up with technology; we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant [Database Developer]
In this role, you will collaborate closely with cross-functional teams, including developers, business analysts, and stakeholders, to deliver high-quality software solutions that enhance operational efficiency and support strategic business objectives.
Responsibilities: Design and implement scalable and secure database architectures using MSSQL and MySQL. Develop complex SQL queries, stored procedures, triggers, and views. Create and maintain database schemas that support business processes. Optimize database performance through indexing, query tuning, and server configuration (an indexing sketch follows this listing). Monitor and troubleshoot database performance issues and bottlenecks. Implement and manage database security protocols, including access controls and encryption. Ensure data integrity, backup, and disaster recovery procedures are in place and tested.
Qualifications we seek in you!
Minimum Qualifications: BE/B.Tech/MCA
Preferred Qualifications / Skills: Proficient in writing and optimizing SQL queries. Strong understanding of relational database design principles. Experience with performance tuning and troubleshooting. Familiarity with version control systems (e.g., Git). Knowledge of backup, recovery, and high-availability strategies. Familiarity with NoSQL databases like MongoDB is a plus. Experience with ETL tools and data warehousing concepts. Knowledge of scripting languages (e.g., Python, Shell) for automation.
Why join Genpact? Lead AI-first transformation: build and scale AI solutions that redefine industries. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills. Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace. Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up.
Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Job: Principal Consultant | Primary Location: India-Noida | Schedule: Full-time | Education Level: Bachelor's / Graduation / Equivalent | Job Posting: Jul 31, 2025, 5:54:34 AM | Unposting Date: Ongoing | Master Skills List: Consulting | Job Category: Full Time
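The indexing and query-tuning duty above can be illustrated in a self-contained way. MSSQL and MySQL each have their own plan tooling (graphical execution plans, EXPLAIN), but the effect of an index on a lookup is the same idea everywhere; the sketch below uses Python's stdlib sqlite3 as a stand-in, with a throwaway table.

```python
# Sketch: observe a query plan change before/after adding an index.
# Uses stdlib sqlite3 as a stand-in; MSSQL/MySQL have equivalent plan tools.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 500, i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(amount) FROM orders WHERE customer_id = ?"

def show_plan(label):
    plan = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
    print(label, "->", [row[-1] for row in plan])

show_plan("before index")   # expect a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
show_plan("after index")    # expect a search using idx_orders_customer
```

Reading the plan before and after is the habit that matters: the same discipline applies when inspecting an MSSQL execution plan or MySQL's EXPLAIN output on production queries.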
Posted 6 days ago