5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
5+ years of experience in data analysis, engineering, and science. Proficiency in Azure Data Factory, Azure Databricks, Python, PySpark, SQL, and PL/SQL or SAS. Design, develop, and maintain ETL pipelines using Azure Databricks, Azure Data Factory, and other relevant technologies. Manage and optimize data storage solutions using Azure Data Lake Storage (ADLS). Develop and deploy data processing workflows using PySpark and Python. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and ensure data quality. Implement data integration solutions and ensure seamless data flow across systems. Use GitHub for version control and collaboration on the codebase. Monitor and troubleshoot data pipelines to ensure data accuracy and availability. Stay current with the latest industry trends and best practices in data engineering.
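As an illustration of the pipeline work this posting describes, here is a minimal PySpark sketch of an ADLS-to-Delta ETL step. The storage paths and schema are hypothetical stand-ins, and a real Databricks job would add authentication, error handling, and ADF orchestration:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical ADLS paths for illustration only.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders_clean/"

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files landed in the data lake.
orders = spark.read.option("header", True).csv(RAW_PATH)

# Transform: basic cleansing and typing on an assumed schema.
clean = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

# Load: write curated data as Delta (assumes a Databricks/Delta runtime),
# partitioned for downstream queries.
(clean.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .save(CURATED_PATH))
```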
Posted 1 week ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description / Additional Comments: Senior Data Streaming Engineer. Build and maintain a real-time, file-based streaming data platform leveraging open-source technologies. The ideal candidate will have experience with Kubernetes (K8s), Apache Kafka, and Java multithreading, and will be responsible for: developing a highly performant, scalable streaming architecture optimized for high throughput and low memory overhead; implementing auto-scaling solutions to support variable data loads efficiently; integrating reference data enrichment workflows using Snowflake; ensuring system reliability and real-time processing across distributed environments; collaborating with cross-functional teams to deliver robust, cloud-native data solutions; and building scalable, optimized ETL/ELT workflows leveraging Azure Data Factory (ADF) and Apache Spark within Databricks. Skills: Azure, Kafka, Java, Kubernetes
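The posting targets Java for the consumer layer; purely as an illustrative sketch of the consume-and-enrich pattern it describes, here is a minimal Python version using the kafka-python client. The topic name, broker address, and in-memory reference table are hypothetical (the posting describes sourcing reference data from Snowflake):

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical reference data; the described platform would load this from Snowflake.
REFERENCE = {"ACC123": {"region": "EMEA"}, "ACC456": {"region": "APAC"}}

consumer = KafkaConsumer(
    "file-events",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",
    group_id="enrichment-workers",      # consumer groups allow horizontal scaling
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    enable_auto_commit=True,
)

for event in consumer:
    record = event.value
    # Enrich each record with reference attributes before downstream processing.
    ref = REFERENCE.get(record.get("account_id"), {})
    enriched = {**record, **ref}
    print(enriched)  # placeholder for the real sink (e.g., another topic)
```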
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Hyderabad, Telangana
On-site
This role is a part of the upcoming ValueMomentum Data Engineering Recruitment Drive on July 19th. As a Tech Lead-Modern Data, you will be responsible for leading data integration processes utilizing Informatica IICS. With 7-10 years of experience, you will be involved in designing, developing, and managing ETL/ELT processes. Your role will entail close collaboration with cross-functional teams to ensure that data solutions not only meet business needs but also align with industry best practices.

Joining ValueMomentum's Engineering Center means becoming part of a team of passionate engineers dedicated to addressing complex business challenges with innovative solutions. Our focus on transforming the P&C insurance value chain relies on a strong engineering foundation and continuous refinement of processes, methodologies, tools, agile delivery teams, and core engineering archetypes. With expertise in Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise, we are committed to investing in your growth through our Infinity Program, empowering you to build your career with role-specific skill development using immersive learning platforms.

As a Tech Lead, your responsibilities will include designing and implementing data integration processes using Informatica IICS, and constructing mappings, tasks, task flows, schedules, and parameter files. You will ensure adherence to ETL/ELT best practices, create ETL mapping documentation, collaborate with stakeholders to understand data requirements, and implement solutions. Supporting activities such as ticket creation and resolution in Jira/ServiceNow, working in an Agile/DevOps environment, and ensuring timely delivery of solutions are key aspects of this role.

To be successful in this position, you should have at least 7 years of experience in Informatica, with a minimum of 2 years in Informatica IICS. Strong experience in ETL tools and database design, a good understanding of Agile methodologies, experience working in onsite/offshore models, and experience in the insurance or financial industry are preferred. Strong problem-solving and analytical skills, attention to detail in high-pressure situations, and excellent verbal and written communication skills are essential.

ValueMomentum is a leading solutions provider for the global property and casualty insurance industry. With a focus on helping insurers achieve sustained growth and high performance, the company enhances stakeholder value and fosters resilient societies. Having served over 100 insurers, ValueMomentum stands as one of the largest services providers exclusively dedicated to the insurance industry.

At ValueMomentum, we offer a congenial environment for your professional growth, surrounded by experienced professionals. Benefits include a competitive compensation package, individual career development through coaching and mentoring programs, comprehensive training and certification programs, performance management tools such as goal setting, continuous feedback, and year-end appraisals, as well as rewards and recognition for outstanding performers.
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Location: HYDERABAD OFFICE INDIA

Job Description: Are you looking to take your career to the next level? We're looking for a Software Engineer to join our Data & Analytics Core Data Lake Platform engineering team. We are searching for self-motivated candidates who will use modern Agile and DevOps practices to craft, develop, test, and deploy IT systems and applications, delivering global projects in multinational teams. The P&G Core Data Lake Platform is a central component of the P&G data and analytics ecosystem. The CDL Platform is used to deliver a broad scope of digital products and frameworks used by data engineers and business analysts. In this role you will have an opportunity to demonstrate data engineering abilities to deliver solutions enriching data cataloging and data discoverability for our users. With our approach to building solutions that fit the scale at which P&G operates, we combine data engineering standard methodologies (Databricks) with modern software engineering standards (Azure, DevOps, SRE) to deliver value for P&G.

Responsibilities: Writing and testing code for Data & Analytics applications and building end-to-end cloud-native (Azure) solutions. Engineering applications throughout their entire lifecycle, from development and deployment through upgrade and replacement/termination. Ensuring that development and architecture adhere to established standards, including modern software engineering practices (CI/CD, Agile, DevOps). Collaborating with internal technical specialists and vendors to develop final products that improve overall performance and efficiency and/or enable adoption of new business processes.

Job Qualifications: Bachelor's degree in Computer Science or a related technical field. 4+ years of experience working as a Software Engineer (with a focus on developing in Python, PySpark, Databricks, ADF). Experience leveraging modern software engineering practices (code standards, Gitflow, automated testing, CI/CD, DevOps). Experience working with cloud infrastructure (Azure preferred). Strong verbal, written, and interpersonal communication skills. A strong desire to produce high-quality software through cross-functional collaboration, testing, code reviews, and other best practices.

You should also have: strong written and verbal English communication skills to influence others; proven use of data and tools; the ability to balance multiple priorities; and the ability to work collaboratively across different functions and geographies.

About Us: We produce globally recognized brands and we grow the best business leaders in the industry. With a portfolio of trusted brands as diverse as ours, it is paramount our leaders are able to lead with courage the vast array of brands, categories, and functions. We serve consumers around the world with one of the strongest portfolios of trusted, quality, leadership brands, including Always®, Ariel®, Gillette®, Head & Shoulders®, Herbal Essences®, Oral-B®, Pampers®, Pantene®, Tampax® and more. Our community includes operations in approximately 70 countries worldwide. Visit http://www.pg.com to know more. We are an equal opportunity employer and value diversity at our company. We do not discriminate against individuals on the basis of race, color, gender, age, national origin, religion, sexual orientation, gender identity or expression, marital status, citizenship, disability, HIV/AIDS status, or any other legally protected factor.
"At P&G, the hiring journey is personalized every step of the way, thereby ensuring equal opportunities for all, with a strong foundation of Ethics & Corporate Responsibility guiding everything we do. All the available job opportunities are posted either on our website - pgcareers.com, or on our official social media pages, for the convenience of prospective candidates, and do not require them to pay any kind of fees towards their application.” Job Schedule Full time Job Number R000134777 Job Segmentation Experienced Professionals (Job Segmentation)
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
About Us: We are an innovative AI SaaS venture that develops cutting-edge AI solutions and provides expert consulting services. Our mission is to empower businesses with state-of-the-art AI technologies and data-driven insights. We're seeking a talented Data Engineer to join our team and help drive our product development and consulting initiatives.

Job Overview: For our Q4 2025 and 2026+ ambition, we are looking for a motivated Intern in Data Engineering (Azure). You will assist in building and maintaining foundational data pipelines and architectures under the guidance of senior team members. This role focuses on learning Azure tools (ADF, Databricks, PySpark, Scala, Python), supporting data ingestion/transformation workflows, and contributing to scalable solutions for AI-driven projects.

Tasks: Develop basic data pipelines using Azure Data Factory, Azure Synapse Analytics, or Azure Databricks. Assist in ingesting structured/semi-structured data from sources (e.g., APIs, databases, files) into Azure Data Lake Storage (ADLS). Write simple SQL queries and scripts for data transformation and validation. Write simple PySpark, Scala, and Python code when required. Monitor pipeline performance and troubleshoot basic issues. Collaborate with AI/ML teams to prepare datasets for model training. Document workflows and adhere to data governance standards.

Preferred Qualifications: Basic knowledge of AI/ML concepts. Bachelor's degree in any stream (Engineering, Science, or Commerce). Basic understanding of Azure services (Data Factory, Synapse, ADLS, SQL Database, Databricks, Azure ML). Familiarity with SQL, Python, PySpark, or Scala for scripting. Exposure to data modeling and ETL/ELT processes. Ability to work in Agile/Scrum teams.

What We Offer: Cutting-edge technology: the opportunity to work on cutting-edge AI projects and shape the future of data visualization. Rapid growth: be part of a high-growth startup with endless opportunities for career advancement. Impactful work: see your contributions make a real difference in how businesses operate. Collaborative culture: join a diverse team of brilliant minds from around the world. Flexible work environment: enjoy remote work options and a healthy work-life balance. Competitive compensation as per market.

We're excited to welcome passionate, driven individuals who are eager to learn and grow with our team. If you're ready to gain hands-on experience, contribute to meaningful projects, and take the next step in your professional journey, we encourage you to apply. We look forward to exploring the possibility of having you onboard. Follow us for more updates: https://www.linkedin.com/company/ingeniusai/posts/
Posted 1 week ago
7.0 - 10.0 years
0 - 1 Lacs
Bengaluru
Remote
Job Title: Senior Data Engineer - Contractual (Remote | 1 to 2 Months Project). Company: Covalensedigital. Job Type: Contract (short-term: 1 to 2 months). Location: Remote. Experience: 7+ years (3+ years in Databricks/Azure Data Engineering).

Job Description: We are looking for an experienced Senior Data Engineer for a short-term remote project (1 to 2 months) to join Covalensedigital on a contractual basis.

Key Responsibilities: Design and implement robust data pipelines using Azure Data Factory (ADF) and Databricks. Work on data ingestion, transformation, cleansing, and aggregation from multiple sources. Use Python, Spark, and SQL to develop scalable data workflows. Integrate pipelines with external APIs and systems. Ensure data quality, accuracy, and adherence to standards. Collaborate with data scientists, analysts, and engineering teams. Monitor and troubleshoot data pipelines for smooth operation.

Must-Have Skills: Python / Spark / SQL / ADLS / Databricks / ADF / ETL. 3+ years of hands-on experience in Azure Databricks. Deep understanding of large-scale data architecture, data lakes, warehousing, and cloud/on-premise hybrid solutions. Strong experience in data cleansing and Azure Data Explorer workflows. Ability to work independently and deliver high-quality output within timelines. Excellent communication skills.

Bonus: Experience in Insurance Domain projects. Familiarity with data quality frameworks, data cataloging, and data profiling tools.

Contract Details: Duration: 1 to 2 months. Type: Contractual (Remote). Start: Immediate.

How to Apply: Interested candidates, please send your resume to kalaivanan.balasubamaniam@covalensedigital.com. Thanks, Kalai. 8015302990
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Telangana
On-site
About Chubb: Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance, and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength, and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at www.chubb.com.

About Chubb India: At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb, where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Role: ML Engineer (Associate / Senior). Experience: 2-3 years (Associate), 4-5 years (Senior). Mandatory skills: Python/MLOps/Docker and Kubernetes/FastAPI or Flask/CICD/Jenkins/Spark/SQL/RDB/Cosmos/Kafka/ADLS/API/Databricks. Other skills: Azure/LLMOps/ADF/ETL. Location: Bangalore. Notice period: less than 60 days.

Job Description: We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges. Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions. Experience deploying ML models to production is required. Create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders. Integrate machine learning models seamlessly into existing production systems. Continuously monitor and evaluate model performance, and retrain models automatically or periodically. Streamline existing ML pipelines to increase throughput. Identify and address security vulnerabilities in existing applications proactively. Design, develop, and implement machine learning models, preferably for insurance-related applications. Be well-versed in the Azure ecosystem. Knowledge of NLP and Generative AI techniques, plus relevant experience, will be a plus. Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) will be a plus. Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team.

Why Chubb? Join Chubb to be part of a leading global insurance company!
Our constant focus on employee experience, along with a start-up-like culture, empowers you to achieve impactful results. Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence. A great place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025, and 2025-2026. Laser focus on excellence: at Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being; we constantly seek new and innovative ways to excel at work and deliver outstanding results. Start-up culture: embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter. Growth and success: as we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.

Employee Benefits: Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision-related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and investment plans: we provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits, and car lease that help employees optimally plan their finances. Upskilling and career growth opportunities: with a focus on continuous learning, we offer customized programs that support upskilling, such as education reimbursement programs, certification programs, and access to global learning programs. Health and welfare benefits: we care about our employees' well-being in and out of work, with benefits like an Employee Assistance Program (EAP), yearly free health campaigns, and comprehensive insurance benefits.

Application Process: Our recruitment process is designed to be transparent and inclusive. Step 1: Submit your application via the Chubb Careers Portal. Step 2: Engage with our recruitment team for an initial discussion. Step 3: Participate in HackerRank assessments and technical/functional interviews (if applicable). Step 4: Final interaction with Chubb leadership.

Join Us: With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey. Apply Now: Chubb External Careers
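As a hedged illustration of the "real-time inferencing API" responsibility named in this posting, here is a minimal FastAPI sketch for serving a pre-trained scikit-learn classifier. The model file, feature names, and route are hypothetical placeholders, not the employer's actual stack:

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="claims-risk-scorer")  # hypothetical service name

# Assumes a scikit-learn classifier serialized with joblib; the path is illustrative.
model = joblib.load("model.joblib")

class Features(BaseModel):
    policy_age_years: float
    claim_amount: float
    prior_claims: int

@app.post("/predict")
def predict(features: Features) -> dict:
    # Order must match the feature order the model was trained on.
    row = [[features.policy_age_years, features.claim_amount, features.prior_claims]]
    score = float(model.predict_proba(row)[0][1])
    return {"risk_score": score}
```

Run locally with `uvicorn main:app`; a production deployment would add input validation, model versioning, authentication, and monitoring.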
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Company: JMAN Group is a growing technology-enabled management consultancy that empowers organizations to create value through data. Founded in 2010, we are a team of 450+ consultants based in London, UK, and a team of 300+ engineers in Chennai, India. Having delivered multiple projects in the US, we are now opening a new office in New York to help us support and grow our US client base. We approach business problems with the mindset of a management consultancy and the capabilities of a tech company. We work across all sectors, and have in-depth experience in private equity, pharmaceuticals, government departments, and high-street chains. Our team is as cutting-edge as our work. We pride ourselves on being great to work with: no jargon or corporate-speak, flexible to change, and receptive to feedback. We have a huge focus on investing in the training and professional development of our team, to ensure they can deliver high-quality work and shape our journey to becoming a globally recognised brand. The business has grown quickly in the last 3 years with no signs of slowing down.

About the Role: 7+ years of experience in managing Data & Analytics service delivery, preferably within a Managed Services or consulting environment.

Responsibilities: Serve as the primary owner for all managed service engagements across all clients, ensuring SLAs and KPIs are met consistently. Continuously improve the operating model, including ticket workflows, escalation paths, and monitoring practices. Coordinate triaging and resolution of incidents and service requests raised by client stakeholders. Collaborate with client and internal cluster teams to manage operational roadmaps, recurring issues, and enhancement backlogs. Lead a 40+ member team of Data Engineers and Consultants across offices, ensuring high-quality delivery and adherence to standards. Support the transition from project mode to Managed Services, including knowledge transfer, documentation, and platform walkthroughs. Ensure documentation is up to date for architecture, SOPs, and common issues. Contribute to service reviews, retrospectives, and continuous improvement planning. Report on service metrics, root cause analyses, and team utilization to internal and client stakeholders. Participate in resourcing and onboarding planning in collaboration with engagement managers, resourcing managers, and internal cluster leads. Act as a coach and mentor to junior team members, promoting skill development and a strong delivery culture.

Qualifications: ETL or ELT: Azure Data Factory, Databricks, Synapse, dbt (any two mandatory). Data Warehousing: Azure SQL Server/Redshift/BigQuery/Databricks/Snowflake (any one mandatory). Data Visualization: Looker, Power BI, Tableau (basic understanding to support stakeholder queries). Cloud: Azure (mandatory); AWS or GCP (good to have). SQL and Scripting: ability to read/debug SQL and Python scripts. Monitoring: Azure Monitor, Log Analytics, Datadog, or equivalent tools. Ticketing & Workflow Tools: Freshdesk, Jira, ServiceNow, or similar. DevOps: containerization technologies (e.g., Docker, Kubernetes), Git, CI/CD pipelines (exposure preferred).

Required Skills: Strong understanding of data engineering and analytics concepts, including ELT/ETL pipelines, data warehousing, and reporting layers. Experience in ticketing, issue triaging, SLAs, and capacity planning for BAU operations. Hands-on understanding of SQL and scripting languages (Python preferred) for debugging/troubleshooting. Proficiency with cloud platforms like Azure and AWS; familiarity with DevOps practices is a plus. Familiarity with orchestration and data pipeline tools such as ADF, Synapse, dbt, Matillion, or Fabric. Understanding of monitoring tools, incident management practices, and alerting systems (e.g., Datadog, Azure Monitor, PagerDuty). Strong stakeholder communication, documentation, and presentation skills. Experience working with global teams and collaborating across time zones.
Posted 1 week ago
14.0 years
0 Lacs
Greater Hyderabad Area
On-site
Area(s) of responsibility: Azure Data Architect (6B). Experience: 14 to 16 years, including 5+ years of relevant experience as an Azure Data Architect. Azure Architect with 5+ years' experience in architecting and designing solutions on the Azure Data Platform, including PySpark. Knowledge and understanding of Unity Catalog. A minimum of 7 years' working experience on the Azure Data Platform. Knowledge of Azure cloud security. Knowledge of Data Mesh (experience is good to have). Good understanding of Microsoft Fabric and Copilot. Responsible for designing and implementing secure, scalable, and highly available cloud-based solutions, and for estimation, on Azure Cloud. Experience with integration of different data sources with data warehouses and data lakes is required. Experience in creating data warehouses and data lakes for reporting, AI, and machine learning. Understanding of data modelling and data architecture concepts. Able to clearly articulate the pros and cons of various technologies and platforms. Collaborate with clients to understand their business requirements and translate them into technical solutions that leverage Azure cloud platforms. Define and implement cloud governance and best practices. Identify and implement automation opportunities to increase operational efficiency. Conduct knowledge-sharing and training sessions to educate clients and internal teams on cloud technologies. Domain knowledge of life sciences (any of healthcare, pharma, med-tech, medical devices, or manufacturing) is good to have. Mandatory skill set: Azure Databricks, ADF, architecture, PySpark, SQL. Mandatory certification as an Azure Architect. Location: Mumbai, Pune, or Noida
Posted 1 week ago
8.0 years
0 Lacs
Kerala, India
On-site
Job Role: Senior .NET Developer. Experience: 8+ years. Notice period: Immediate. Location: Trivandrum / Kochi.

Introduction: Candidates should have 8+ years of experience in the IT industry, with strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud. Working hours are 8 hours, with 4 hours of overlap during the EST time zone (12 PM - 9 PM). These overlap hours are mandatory, as meetings happen during this window.

Responsibilities include: Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery. Integrate and support third-party APIs and external services. Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack. Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC). Participate in Agile/Scrum ceremonies and manage tasks using Jira. Understand technical priorities, architectural dependencies, risks, and implementation challenges. Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability.

Certifications: Microsoft Certified: Azure Fundamentals. Microsoft Certified: Azure Developer Associate. Other relevant certifications in Azure, .NET, or cloud technologies.

Primary Skills: 8+ years of hands-on development experience with C#, .NET Core 6/8+, Entity Framework / EF Core, JavaScript, jQuery, and REST APIs. Expertise in MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types. Skilled in unit testing with XUnit and MSTest. Strong in software design patterns, system architecture, and scalable solution design. Ability to lead and inspire teams through clear communication, technical mentorship, and ownership. Strong problem-solving and debugging capabilities. Ability to write reusable, testable, and efficient code. Develop and maintain frameworks and shared libraries to support large-scale applications. Excellent technical documentation, communication, and leadership skills. Microservices and Service-Oriented Architecture (SOA). Experience in API integrations. 2+ years of hands-on experience with Azure Cloud Services, including Azure Functions, Azure Durable Functions, Azure Service Bus, Event Grid, Storage Queues, Blob Storage, Azure Key Vault, SQL Azure, Application Insights, and Azure Monitoring.

Secondary Skills: Familiarity with AngularJS, ReactJS, and other front-end frameworks. Experience with Azure API Management (APIM). Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes). Experience with Azure Data Factory (ADF) and Logic Apps. Exposure to application support and operational monitoring. Azure DevOps CI/CD pipelines (Classic / YAML).
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for candidates with a minimum of 2+ years' experience in development/support in Oracle PL/SQL, Unix shell scripting, Oracle Forms, and ADF, with implementation/support experience in at least two of the following modules: RMS/ReSA/RPM/Retek/MFCS/SIOCS. The candidate should have experience of at least one Oracle RMS implementation project. The candidate should be able to gather requirements and develop high-level technical designs, and should be able to review those technical designs and code. Candidates with strong verbal, written, and interpersonal communication skills are preferred.

A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from business needs, define to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you! Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills, along with an ability to collaborate. Ability to assess current processes, identify improvement areas, and suggest technology solutions. Knowledge of one or two industry domains.
Posted 1 week ago
5.0 years
15 - 25 Lacs
Gurugram, Haryana, India
On-site
Exp: 5 - 12 Yrs. Work Mode: Hybrid. Location: Bangalore, Chennai, Kolkata, Pune, and Gurgaon. Primary Skills: Snowflake, SQL, DWH, Power BI, ETL, and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities: Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT; design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions; create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications: Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Skills: dwh, gcp, aws, snowflake, airflow, snowpipe, data analysis, sql, data architect, tableau, performance tuning, pipelines, oracle, etl, data modeling, azure, python, dbt, azkaban, power bi, fivetran, sigma computing, data warehousing, luigi, informatica
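The posting names DBT for the SCD Type-2 requirement; purely as an illustration of the underlying pattern, here is a minimal PySpark/Delta sketch of a two-step Type-2 load (expire changed rows, then append fresh versions). Table paths, keys, and the tracked column are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("scd2-demo").getOrCreate()

DIM_PATH = "/warehouse/dim_customer"                 # hypothetical Delta table
updates = spark.read.parquet("/staging/customers")   # incoming snapshot

dim = DeltaTable.forPath(spark, DIM_PATH)

# Step 1: close out current rows whose tracked attribute changed.
(dim.alias("d")
 .merge(updates.alias("u"),
        "d.customer_id = u.customer_id AND d.is_current = true")
 .whenMatchedUpdate(
     condition="d.address <> u.address",
     set={"is_current": "false", "valid_to": "current_timestamp()"})
 .execute())

# Step 2: append a fresh 'current' version for new and changed customers.
current = spark.read.format("delta").load(DIM_PATH).where("is_current = true")
new_versions = (
    updates.alias("u")
    .join(current.alias("c"), "customer_id", "left")
    # keep rows with no current version (new) or a changed tracked attribute
    .where(F.col("c.address").isNull() | (F.col("u.address") != F.col("c.address")))
    .select("u.*")
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
)
new_versions.write.format("delta").mode("append").save(DIM_PATH)
```

In a DBT project the same effect would typically come from a snapshot with the timestamp or check strategy; the sketch above just makes the expire-and-append mechanics explicit.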
Posted 1 week ago
5.0 years
15 - 25 Lacs
Chennai, Tamil Nadu, India
On-site
Exp: 5 - 12 Yrs. Work Mode: Hybrid. Location: Bangalore, Chennai, Kolkata, Pune, and Gurgaon. Primary Skills: Snowflake, SQL, DWH, Power BI, ETL, and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities: Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT; design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions; create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications: Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Skills: dwh, gcp, aws, snowflake, airflow, snowpipe, data analysis, sql, data architect, tableau, performance tuning, pipelines, oracle, etl, data modeling, azure, python, dbt, azkaban, power bi, fivetran, sigma computing, data warehousing, luigi, informatica
Posted 1 week ago
5.0 years
15 - 25 Lacs
Greater Kolkata Area
On-site
Exp: 5 - 12 Yrs. Work Mode: Hybrid. Location: Bangalore, Chennai, Kolkata, Pune, and Gurgaon. Primary Skills: Snowflake, SQL, DWH, Power BI, ETL, and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities: Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT; design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions; create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications: Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Skills: dwh, gcp, aws, snowflake, airflow, snowpipe, data analysis, sql, data architect, tableau, performance tuning, pipelines, oracle, etl, data modeling, azure, python, dbt, azkaban, power bi, fivetran, sigma computing, data warehousing, luigi, informatica
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Cloud Architect with expertise in Azure and Snowflake, you will be responsible for designing and implementing secure, scalable, and highly available cloud-based solutions on AWS and Azure. Your role will draw on your experience with Azure Databricks, ADF, Azure Synapse, PySpark, and Snowflake services. Additionally, you will participate in pre-sales activities, including RFP and proposal writing. Your experience with integrating various data sources with data warehouses and data lakes will be crucial for this role. You will also be expected to create data warehouses and data lakes for reporting, AI, and machine learning purposes, while having a solid understanding of data modelling and data architecture concepts. Collaborating with clients to comprehend their business requirements, and translating them into technical solutions that leverage Snowflake and Azure cloud platforms, will be a key aspect of your responsibilities. Furthermore, you will be required to clearly articulate the advantages and disadvantages of different technologies and platforms, as well as participate in proposal and capability presentations. Defining and implementing cloud governance and best practices, identifying and implementing automation opportunities for increased operational efficiency, and conducting knowledge-sharing and training sessions to educate clients and internal teams on cloud technologies are additional duties associated with this role. Your expertise will play a vital role in ensuring the success of cloud projects and the satisfaction of clients.
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Responsibilities:
- Hands-on experience with Azure data components such as ADF, Databricks, and Azure SQL
- Strong programming logic in SQL
- Good PySpark knowledge for Azure Databricks
- Understanding of data lake and data warehouse concepts
- Understanding of unit and integration testing
- Good communication skills to express thoughts and interact with business users
- Understanding of data security and data compliance
- Understanding of the Agile model
- Understanding of project documentation
- Certification (good to have)
- Domain knowledge

Mandatory skill sets: Azure DE, ADB, ADF, ADL
Experience required: 4 to 7 years
Location: Ahmedabad
Posted 1 week ago
5.0 years
15 - 25 Lacs
Pune, Maharashtra, India
On-site
Exp: 5 - 12 Yrs. Work Mode: Hybrid. Location: Bangalore, Chennai, Kolkata, Pune, and Gurgaon. Primary Skills: Snowflake, SQL, DWH, Power BI, ETL, and Informatica.

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities: Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies. Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources. SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis. Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT; design and optimize high-performance data architectures. Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions. Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards. Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions; create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications: Expertise in Snowflake for data warehousing and ELT processes. Strong proficiency in SQL for relational databases and writing complex queries. Experience with Informatica PowerCenter for data integration and ETL development. Experience using Power BI for data visualization and business intelligence reporting. Experience with Fivetran for automated ELT pipelines. Familiarity with Sigma Computing, Tableau, Oracle, and DBT. Strong data analysis, requirement gathering, and mapping skills. Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS, or GCP. Experience with workflow management tools such as Airflow, Azkaban, or Luigi. Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Skills: dwh, gcp, aws, snowflake, airflow, snowpipe, data analysis, sql, data architect, tableau, performance tuning, pipelines, oracle, etl, data modeling, azure, python, dbt, azkaban, power bi, fivetran, sigma computing, data warehousing, luigi, informatica
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Client: Our client is an Indian multinational technology company based in Bengaluru. It provides information technology, consulting, and business process services, and is one of India's Big Six IT services companies. Services include cloud computing, computer security, digital transformation, artificial intelligence, robotics, data analytics, and other technologies.

Job Title: Azure SQL Database Engineer. Location: Pune. Experience: 5+ years. Job Type: Contract. Notice Period: Immediate joiners.

Primary Responsibilities: Design, implement, and manage Azure SQL Database solutions to support business applications. Develop and optimize SQL Server stored procedures, functions, and indexes to enhance database performance. Create and manage ETL processes using Azure Data Factory (ADF) to ensure efficient data integration and transformation. Conduct performance tuning and index optimization to ensure efficient query execution and resource utilization. Implement data governance and data quality measures to maintain data accuracy and consistency. Oversee data management processes to ensure secure and compliant data storage and access.

Secondary Responsibilities: Collaborate with development teams to integrate .NET Core Web API solutions with Azure SQL Database. Implement data streaming and caching solutions using technologies such as Kafka and Azure Redis. Work with other Azure services to ensure seamless integration and optimal performance.

Qualifications and Experience: A minimum of 8 years of experience in SQL Server, stored procedures, ADF/ETL, functions, performance tuning, index optimization, data governance, data quality, and data management. Proven experience with .NET Core Web API, Azure, data streaming, caching, and Kafka. Strong analytical and problem-solving skills. Excellent communication and teamwork abilities. Bachelor's degree in Computer Science, Information Technology, or a related field.
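As a small, hedged illustration of the stored-procedure work listed above, here is a Python sketch that invokes a SQL Server procedure via pyodbc; the connection string, procedure name, and parameter are hypothetical placeholders:

```python
import pyodbc

# Hypothetical Azure SQL connection details; real credentials belong in Key Vault.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=exampledb;UID=etl_user;PWD=<secret>;Encrypt=yes"
)

with pyodbc.connect(CONN_STR) as conn:
    cursor = conn.cursor()
    # Hypothetical procedure that upserts a day's worth of staged rows;
    # the ? placeholder binds the parameter safely.
    cursor.execute("EXEC dbo.usp_LoadDailyOrders @LoadDate = ?", "2025-07-01")
    conn.commit()
```

In an ADF pipeline the same call would typically run through a Stored Procedure activity rather than client code; the sketch just shows the mechanics.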
Posted 1 week ago
0.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Location: Chennai, Tamil Nadu, India. Job ID: R-231423. Date posted: 22/07/2025. Job Title: Senior Consultant - Scrum Master. Career Level: D3.

Introduction to role: Are you ready to disrupt an industry and change lives? At AstraZeneca, our work directly impacts patients by redefining our ability to develop life-changing medicines. As a Senior Consultant, you'll be part of a hard-working team that empowers the business to perform at its peak, combining groundbreaking science with leading digital technology platforms and data. We are seeking an experienced Scrum Master to lead groundbreaking IT projects, collaborating with diverse teams to deliver exceptional business value through agile methodologies. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise, where your expertise will drive scale and speed to deliver exponential growth.

Accountabilities: Accountable for the delivery of Agile project sprints on time and on budget in accordance with AstraZeneca's Adaptive Delivery Framework (ADF) and standards. Responsible for planning, leading, organizing, and motivating scrum teams to achieve high performance and quality in delivering user stories. Reacts to flexible work backlogs to meet changing needs and requirements. Manages day-to-day operational aspects of the scrum team and scope to ensure timely project completion. Manages relationships with Business Analysts/Product Owners to ensure business requirements are effectively understood and committed to project iterations/phases. Works with the Snr PM/RTE to lead scrum teams, communicating roles and responsibilities while encouraging teamwork and collaboration. Ensures project work products are complete, current, and stored appropriately. Promotes a project-improvement mentality across teams. Resolves quality and compliance risks and issues, escalating to the Snr PM/RTE when appropriate. Supports the project team in adhering to all standards, including the AstraZeneca Adaptive Delivery Framework, quality, compliance, processes, defined technical capabilities, and standard methodologies. Establishes key relationships across other Scrum Masters, PMs, Snr PMs/RTEs, cross-skilled scrum teams, business-facing representatives, and 3rd-party supplier groups.

Essential Skills/Experience: Expert in Agile methodology initiatives across global teams; certified Scrum Master with experience in SAFe 4.0 or equivalent. Operating in a similar role within a global business; 6-9 years in a scrum environment. Experience working closely with Business Analysts/Product Owners; leading cross-skilled personnel across Development, QA, and Release Management fields. Comfortable reporting into the Snr Project Manager/Release Train Engineer.

Desirable Skills/Experience: Detailed knowledge of Agile principles and practices, focusing on SAFe and Scrum. Strong consulting and facilitation skills in leading technical teams in Agile framework adoption. Experience working with onshore/offshore personnel within the digital web delivery paradigm. Positive relationship building and interpersonal skills; excellent listening and communication skills. Experience working in a global organization across cultural boundaries. Knowledge of Agile techniques: User Stories, ATDD, TDD, Continuous Integration/Testing, Pairing, Automated Testing, Agile Games. Experience with multiple Scrum teams in various contexts; familiarity with the Atlassian suite (Jira, Confluence).
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, innovation is at the heart of everything we do. We empower our employees to explore new solutions and experiment with leading-edge technology in a dynamic environment. With countless opportunities for learning and growth, you'll be part of a team that has the backing to innovate and disrupt an industry. Our diverse minds work inclusively together to make a meaningful impact by developing life-changing medicines. With investment behind us, there's no slowing us down as we strive to reach patients in need every day. Ready to make a big impact? Apply now and join us on this exciting journey! Date Posted 23-Jul-2025 Closing Date 24-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 1 week ago
5.0 - 14.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Senior Data Analytics and Quality Engineer with 7 to 14 years of experience, you will play a crucial role in ensuring the quality of analytics products within our organization. Your responsibilities will include designing and documenting testing scenarios, creating test plans, and reviewing quality specifications and technical designs for both existing and new analytics products. You will collaborate closely with the Data & Analytics team to drive data quality programs and implement automated test frameworks within an agile team structure. Your expertise in QA processes, mentoring, ETL testing, data validation, data quality, and knowledge of RCM or US healthcare will be essential in this role. Proficiency in programming languages such as SQL (T-SQL or PL/SQL) is a must, while knowledge of Python is a plus. Hands-on experience with tools like SSMS, Toad, BI tools (Tableau, Power BI), SSIS, ADF, and Snowflake will be beneficial. Familiarity with data testing tools like Great Expectations, Deequ, dbt, and Pytest for data scripts is desirable.

Your educational background should include a Bachelor's degree in Computer Science, Information Technology, Data Science, Math, Finance, or a related field, along with a minimum of 5 years of experience as a quality assurance engineer or data analyst with a strong focus on data quality. Preferred qualifications include QA-related certifications and a strong understanding of the US healthcare revenue cycle and billing.

In this role, you will be responsible for test execution for healthcare analytics, creation of detailed test plans and test cases, and ensuring that production system defects are documented and resolved promptly. Your ability to design testing procedures, write testing scripts, and monitor testing results according to best practices will be crucial in ensuring that our analytics meet established quality standards. Your knowledge of test case management tools, Agile development tools, data quality frameworks, and automated testing tools will be a valuable asset in this position. Additionally, your proficiency in SQL, ability to test data systems for performance and scalability, and strong analytical skills will contribute to the success of our analytics products. Strong communication skills, process improvement abilities, and time management skills are also essential for this role.

If you are looking to join a growing and innovative organization where you can work with new technology in both manual and automation testing environments, this Senior Quality Assurance Engineer position is an ideal opportunity for you.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Delhi
On-site
Are you a skilled professional with experience in SQL, Python (Pandas & SQLAlchemy), and data engineering? We have an exciting opportunity for an ETL Developer to join our team! As an ETL Developer, you will work with MS SQL, Python, and various databases to extract, transform, and load data in support of insights and business goals. You should have a Bachelor's degree in Computer Science or a related field, or equivalent work experience. Additionally, you should have at least 5 years of experience working with MS SQL, 3 years of experience with Python (Pandas, SQLAlchemy), and 3 years of experience supporting on-call challenges. Key responsibilities include running SQL queries on multiple disparate databases, working with large datasets using Python and Pandas, tuning MS SQL queries, debugging data using Python and SQLAlchemy, collaborating in an agile environment, managing source control with GitLab and GitHub, creating and maintaining databases, and interpreting complex data for insights; familiarity with Azure, ADF, Spark, and Scala concepts is also expected. If you're passionate about data, possess a strong problem-solving mindset, and thrive in a collaborative environment, we encourage you to apply for this position. For more information or to apply, please send your resume to samdarshi.singh@mwidm.com or contact us at +91 62392 61536. Join us in this exciting opportunity to contribute to our data engineering team!
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Haryana
On-site
Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With a workforce of over 125,000 professionals across more than 30 countries, we are fueled by our innate curiosity, entrepreneurial agility, and commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
We are currently seeking applications for the position of Principal Consultant - Databricks Lead Developer. As a Databricks Developer in this role, you will solve cutting-edge real-world problems to meet both functional and non-functional requirements.
Responsibilities:
Keep abreast of new and emerging technologies and assess their potential application to service offerings and products.
Collaborate with architects and lead engineers to devise solutions that meet functional and non-functional requirements.
Demonstrate a sound understanding of relevant industry trends and standards.
Apply strong analytical and technical problem-solving skills.
Bring experience in the Data Engineering domain.
Minimum qualifications:
Bachelor's degree or equivalent in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
<<>> years of experience in IT.
Familiarity with new and emerging technologies and their possible applications for service offerings and products.
Experience collaborating with architects and lead engineers to develop solutions meeting functional and non-functional requirements.
Understanding of industry trends and standards.
Strong analytical and technical problem-solving abilities.
Proficiency in either Python or Scala, preferably Python.
Experience in the Data Engineering domain.
Preferred qualifications:
Knowledge of Unity Catalog and basic governance.
Understanding of Databricks SQL endpoints.
Experience with CI/CD for building Databricks job pipelines.
Exposure to migration projects for building unified data platforms.
Familiarity with DBT, Docker, and Kubernetes.
If you are a proactive individual with a passion for innovation and a strong commitment to continuous learning and upskilling, we invite you to apply for this exciting opportunity to join our team at Genpact.
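For context on the kind of Databricks development this role involves, a minimal PySpark sketch of a batch transformation is shown below; the paths and column names are hypothetical and assume a Databricks workspace with Delta Lake available.

```python
# A sketch only: hypothetical mount paths and columns, assuming Delta Lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-curation").getOrCreate()

raw = spark.read.format("delta").load("/mnt/raw/sales")  # hypothetical path

curated = (
    raw.dropDuplicates(["sale_id"])                  # remove reprocessed rows
       .withColumn("sale_date", F.to_date("sale_ts"))
       .filter(F.col("amount") > 0)                  # drop voided sales
)

# Persist the curated layer for downstream consumers.
curated.write.format("delta").mode("overwrite").save("/mnt/curated/sales")
```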
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners, and final business users, which will give you the right visibility into, and understanding of, the criticality of your developments.
Responsibilities
Deliver key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and the Machine Learning models in scope.
Contribute actively to code and development in projects and services.
Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption.
Partner with ML engineers working on industrialization.
Communicate with business stakeholders during service design, training, and knowledge transfer.
Support large-scale experimentation and build data-driven models.
Refine requirements into modelling problems.
Influence product teams through data-based recommendations.
Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer.
Create reusable packages or libraries.
Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards.
Leverage big data technologies to help process data and build scaled data pipelines (batch to real time).
Implement the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines.
Automate ML model deployments.
Qualifications
BE/B.Tech in Computer Science, Math, or other technical fields.
Overall 2-4 years of experience working as a Data Scientist.
2+ years' experience building solutions in the commercial or supply chain space.
2+ years working in a team to deliver production-level analytic solutions.
Fluent in Git (version control); understanding of Jenkins and Docker is a plus.
Fluent in SQL syntax.
2+ years' experience with statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems.
2+ years' experience developing business-problem-related statistical/ML modelling with industry tools, with a primary focus on Python or PySpark development.
Data Science - hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series/demand forecast models is a plus.
Programming skills - hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL.
Statistics - good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators.
Cloud (Azure) - experience in Databricks and ADF is desirable.
Familiarity with Spark, Hive, and Pig is an added advantage.
Business storytelling and communicating data insights in a business-consumable format; fluent in one visualization tool.
Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities.
Experience with Agile methodology for teamwork and analytics 'product' creation.
Experience in Reinforcement Learning is a plus.
Experience in simulation and optimization problems in any space is a plus.
Experience with Bayesian methods, causal inference, NLP, Responsible AI, or distributed machine learning is a plus.
Experience in DevOps, with hands-on experience with one or more cloud service providers (AWS, GCP, Azure preferred).
Model deployment experience is a plus.
Experience with version control systems like GitHub and CI/CD tools.
Experience in exploratory data analysis.
Knowledge of MLOps/DevOps and deploying ML models is preferred.
Experience using MLflow, Kubeflow, etc. will be preferred.
Experience executing and contributing to MLOps automation infrastructure is good to have.
Exceptional analytical and problem-solving skills.
Stakeholder engagement - BU, vendors.
Experience building statistical models in the retail or supply chain space is a plus.
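Given the posting's emphasis on MLOps and tools like MLflow, a minimal sketch of tracking a supervised model run is shown below; the synthetic dataset and parameters are illustrative only.

```python
# A sketch only: synthetic data and a baseline model, assuming mlflow and
# scikit-learn are installed and an MLflow tracking backend is configured.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)     # record the configuration
    mlflow.log_metric("accuracy", acc)        # record the evaluation result
    mlflow.sklearn.log_model(model, "model")  # store the fitted artifact
```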
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Position: We are conducting an in-person hiring drive for the position of QA Engineer in Hyderabad on 26th July 2025. The interview location is mentioned below:
Hyderabad: Gate 11, Argus Block - Sattva Knowledge City, 6th Floor, beside T-Hub, Silpa Gram Craft Village, Madhapur, Rai Durg, Hyderabad, 500081.
We are looking for a QA Engineer (ETL testing, SSRS, SSIS, data validation).
Role: QA Engineer
Location: Hyderabad
Experience: 6-10 Years
Job Type: Full-Time Employment
Mandatory skills: ETL testing, SSRS, SSIS, data validation
What You'll Do:
Perform data-validation testing to ensure data consistency between on-premises SQL Server and cloud-based Azure and Snowflake environments.
Develop and execute comprehensive testing plans to validate all data within the data warehouse.
Leverage appropriate test automation tools to automate data-validation processes, minimizing manual checks and file comparisons.
Collaborate with data engineers, cloud architects, and other stakeholders to understand data migration requirements and ensure accurate data validation.
Identify, document, and address data discrepancies and validation issues promptly.
Maintain detailed documentation of testing processes, results, and corrective actions.
Continuously improve testing methodologies and tools to enhance data-validation efficiency and accuracy.
Ensure compliance with data governance and security policies during the validation process.
Expertise You'll Bring:
Minimum of 6 years prior experience in data validation and quality assurance on data warehouse projects.
Proven experience in data validation and quality assurance, particularly in data migration projects.
Strong knowledge of SQL Server, Azure cloud data tools (ADF, SHIR, Logic Apps, ADLS Gen2, Blob Storage), and Snowflake.
Proficiency in test automation tools and frameworks (e.g., Selenium, TestNG, JUnit).
Excellent analytical and problem-solving skills.
Strong attention to detail and the ability to identify data inconsistencies.
Effective communication skills to collaborate with cross-functional teams.
Ability to work independently and manage multiple tasks simultaneously.
Benefits:
Competitive salary and benefits package.
Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications.
Opportunity to work with cutting-edge technologies.
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards.
Annual health check-ups.
Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.
Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
Accelerate growth, both professionally and personally.
Impact the world in powerful, positive ways, using the latest technologies.
Enjoy collaborative innovation, with diversity and work-life wellbeing at the core.
Unlock global opportunities to work and learn with the industry's best.
Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
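To illustrate the kind of aggregate-level reconciliation this role calls for, a sketch comparing simple table fingerprints between SQL Server and Snowflake follows; the connection strings, tables, and columns are hypothetical.

```python
# A sketch only: hypothetical connections and tables, assuming SQLAlchemy
# dialects for SQL Server (pyodbc) and Snowflake are installed.
from sqlalchemy import create_engine, text

CHECKS = {
    # table: (key column for distinct count, numeric column for sum)
    "invoices": ("invoice_id", "total_amount"),
    "payments": ("payment_id", "paid_amount"),
}

def fingerprint(engine_url, table, key_col, num_col):
    """Return (row count, distinct keys, column sum) for one table."""
    engine = create_engine(engine_url)
    sql = text(
        f"SELECT COUNT(*), COUNT(DISTINCT {key_col}), SUM({num_col}) "
        f"FROM {table}"
    )
    with engine.connect() as conn:
        return tuple(conn.execute(sql).one())

for table, (key_col, num_col) in CHECKS.items():
    src = fingerprint("mssql+pyodbc://user:pass@dsn", table, key_col, num_col)
    tgt = fingerprint("snowflake://user:pass@acct/db", table, key_col, num_col)
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{table}: source={src} target={tgt} -> {status}")
```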
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview
We are PepsiCo. PepsiCo is one of the world's leading food and beverage companies, with more than $79 billion in net revenue and a global portfolio of diverse and beloved brands. We have a complementary food and beverage portfolio that includes 22 brands that each generate more than $1 billion in annual retail sales. PepsiCo's products are sold in more than 200 countries and territories around the world. PepsiCo's strength is its people. We are over 250,000 game changers, mountain movers, and history makers, located around the world and united by a shared set of values and goals. We believe that acting ethically and responsibly is not only the right thing to do, but also the right thing to do for our business. At PepsiCo, we aim to deliver top-tier financial performance over the long term by integrating sustainability into our business strategy, leaving a positive imprint on society and the environment. We call this Winning with pep+ (PepsiCo Positive). For more information on PepsiCo and the opportunities it holds, visit www.pepsico.com.
PepsiCo Data Analytics & AI Overview: With data deeply embedded in our DNA, PepsiCo Data, Analytics and AI (DA&AI) transforms data into consumer delight. We build and organize business-ready data that allows PepsiCo's leaders to solve their problems with the highest degree of confidence. Our platform of data products and services ensures data is activated at scale. This enables new revenue streams, deeper partner relationships, new consumer experiences, and innovation across the enterprise.
The Data Science Pillar in DA&AI is the organization to which Data Scientists and ML Engineers report within the broader D+A organization. DS will also lead, facilitate, and collaborate with the larger DS community in PepsiCo. DS provides the talent for the development and support of the DS component and its life cycle within DA&AI products, and supports "pre-engagement" activities as requested and validated by the DA&AI prioritization framework.
Data Scientist - Gurugram and Hyderabad
The role involves developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools/Spark/Databricks and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Machine Learning Services and Pipelines.
Responsibilities
Deliver key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and the Machine Learning models in scope.
Collaborate with data engineers and ML engineers to understand data and models and leverage various advanced analytics capabilities.
Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards.
Use big data technologies to help process data and build scaled data pipelines (batch to real time).
Automate the end-to-end ML lifecycle with Azure Machine Learning and Azure/AWS/GCP pipelines.
Set up cloud alerts, monitors, dashboards, and logging, and troubleshoot machine learning infrastructure.
Automate ML model deployments.
Qualifications
Minimum 3 years of hands-on work experience in data science / machine learning.
Minimum 3 years of SQL experience.
Experience in DevOps and Machine Learning (ML), with hands-on experience with one or more cloud service providers.
BE/BS in Computer Science, Math, Physics, or other technical fields.
Data Science - hands-on experience and strong knowledge of building supervised and unsupervised machine learning models.
Programming skills - hands-on experience in statistical programming languages like Python and database query languages like SQL.
Statistics - good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators.
Any cloud - experience in Databricks and ADF is desirable.
Familiarity with Spark, Hive, and Pig is an added advantage.
Model deployment experience will be a plus.
Experience with version control systems like GitHub and CI/CD tools.
Experience in exploratory data analysis.
Knowledge of MLOps/DevOps and deploying ML models is required.
Experience using MLflow, Kubeflow, etc. will be preferred.
Experience executing and contributing to MLOps automation infrastructure is good to have.
Exceptional analytical and problem-solving skills.
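As a small illustration of the supervised and unsupervised modelling skills this posting lists, the scikit-learn sketch below fits one of each on synthetic data; every dataset and parameter here is illustrative.

```python
# A sketch only: synthetic data stands in for real features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, silhouette_score
from sklearn.model_selection import train_test_split

# Unsupervised: segment observations into clusters and score cohesion.
X_u, _ = make_blobs(n_samples=500, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_u)
print("silhouette:", round(silhouette_score(X_u, labels), 3))

# Supervised: fit a regression and report out-of-sample R^2.
rng = np.random.default_rng(0)
X_s = rng.normal(size=(500, 3))
y = X_s @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X_s, y, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print("R^2:", round(r2_score(y_te, reg.predict(X_te)), 3))
```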
Posted 1 week ago