5.0 - 10.0 years
19 - 22 Lacs
Pune
Work from Office
Job Description We are looking for an ambitious and highly skilled Go Developer who is passionate about building high-performance, scalable backend systems. This role is perfect for someone who thrives on solving complex engineering challenges, enjoys working with modern development practices, and takes ownership of delivering impactful solutions. You will be part of a dynamic team where innovation, collaboration, and continuous improvement are not just encouraged; they are expected. If you are eager to make a meaningful contribution to real-world systems in a fast-paced environment, this role is for you. Skill / Qualifications Bachelor's degree in Computer Science, Engineering, or related technical field 5+ years of hands-on backend development experience Strong programming expertise in Golang Hands-on experience with MongoDB, OracleDB, and Snowflake Proficiency in using Logstash, Elasticsearch, and Splunk (Queries, Alerts, Dashboards) Experience in writing and maintaining scripts for automation and monitoring Familiarity with containerization and orchestration using Docker and Kubernetes Proficient in using Kafka for messaging and stream processing Comfortable working with GitLab for version control and CI/CD pipelines Experience handling incident alerts and escalations via PagerDuty Job Responsibilities Participate in daily stand-ups, code reviews, and sprint planning Review code and tickets to ensure high-quality development practices Design technical specifications for databases and APIs Plan and execute production deployments reliably and efficiently Provide Level 2 on-call support via PagerDuty for escalated incidents Collaborate with cross-functional teams including QA, DevOps, and product stakeholders Ensure effective incident response and root cause analysis for production issues Benefits Competitive Market Rate (Depending on Experience)
Posted 2 weeks ago
5.0 - 7.0 years
12 - 16 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are seeking a Senior Cloud Automation Engineer with strong experience in Python, AWS, Terraform, and automation frameworks. The ideal candidate will be responsible for building and integrating tools, utilities, and test automation processes across cloud and enterprise systems, including Salesforce, Dell Boomi, and Snowflake. Key Responsibilities: Design, develop, and maintain Python-based tools and services for automation and integration. Develop and manage infrastructure using Terraform and deploy resources on AWS. Automate internal backend processes including EDI document generation and Salesforce data cleanup. Integrate test automation frameworks with AWS services like Lambda, API Gateway, CloudWatch, and more. Implement and maintain automated test cases using Cucumber, Gherkin, and Postman. Collaborate with QA and DevOps teams to improve testing coverage and CI/CD automation. Work with tools such as Jira, X-Ray, and GitHub Actions for test tracking and version control. Develop utilities for integrations between Salesforce, Boomi, AWS, and Snowflake. Must-Have Qualifications: 5 to 7 years of hands-on experience in software development or test automation. Strong programming skills in Python. Solid experience working with AWS services (Lambda, API Gateway, CloudWatch, etc.). Proficiency with Terraform for managing infrastructure as code. Experience with REST API development and integration. Experience with Dell Boomi, Salesforce and SOQL. Knowledge of SQL (preferably with platforms like Snowflake). Knowledge of EDI formats and automation. Nice-to-Have Skills: Experience in BDD tools like Cucumber, Gherkin. Test management/reporting with X-Ray, integration with Jira. Exposure to version control and CI/CD workflows (e.g., GitHub, GitHub Actions). Tools & Technologies: Languages: Python, SQL Cloud: AWS (Lambda, API Gateway, CloudWatch, etc.) IaC: Terraform Automation/Testing: Cucumber, Gherkin, Postman Data & Integration: Snowflake, Salesforce, Dell Boomi DevOps: Git, GitHub Actions Tracking: Jira, X-Ray. Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote.
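For illustration only, here is a minimal sketch of the kind of Python automation such a role describes: an AWS Lambda handler that runs a placeholder Salesforce cleanup step and publishes a custom CloudWatch metric via boto3. Every identifier (the run_salesforce_cleanup helper, the TestAutomation namespace, the event shape) is a hypothetical assumption, not a detail from the posting.

```python
import json
import boto3

cloudwatch = boto3.client("cloudwatch")


def run_salesforce_cleanup(object_name: str) -> int:
    """Placeholder for the actual cleanup logic (e.g. via simple_salesforce)."""
    # ... query stale records for `object_name` and delete them ...
    return 0  # number of records removed


def lambda_handler(event, context):
    removed = run_salesforce_cleanup(event.get("object", "Lead"))
    # Publish a custom metric so each cleanup run can be monitored and alerted on.
    cloudwatch.put_metric_data(
        Namespace="TestAutomation",
        MetricData=[{"MetricName": "RecordsCleaned", "Value": removed, "Unit": "Count"}],
    )
    return {"statusCode": 200, "body": json.dumps({"removed": removed})}
```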
Posted 2 weeks ago
3.0 - 6.0 years
10 - 17 Lacs
Bengaluru
Remote
Lifecycle Automation Specialist Experience: 3 - 5 Years Salary: Up to AUD 30,000 / year Preferred Notice Period: Within 30 Days Shift: 3:30 AM to 12:30 PM IST Opportunity Type: Remote Placement Type: Contractual Contract Duration: Full-Time, Indefinite Period (*Note: This is a requirement for one of Uplers' Clients) Must-have skills required: Braze, CSS, Hightouch, Intercom, Snowflake, HTML, JavaScript, SQL Good-to-have skills: Communication Compare Club (One of Uplers' Clients) is looking for: a Lifecycle Automation Specialist who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player, with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description Compare Club is transforming into a data-driven organisation focused on delivering highly personalised marketing experiences. As a Lifecycle Automation Specialist, you will play a pivotal role in bringing this vision to life by supporting the development and implementation of automated marketing journeys across key customer touchpoints. This role ensures that automation systems work seamlessly in the background: managing data flow, maintaining data hygiene, launching campaigns on time, and ensuring messages reach the right members. Reporting to the Lifecycle Automations Manager, you'll collaborate closely with the CRM, Member Experience, Tech, and Product teams. This opportunity is ideal for a technically minded individual looking to grow their career at the intersection of marketing, automation, and data. You'll gain hands-on experience with leading MarTech tools including Braze, Hightouch, Snowflake, and Intercom, enabling smarter, faster, and more personalised customer journeys. Key Stakeholder Relationships Internal: Data & Analytics Product Team Sales Tech (Dev/IT) Business Development Member Experience Performance & Growth Brand & Content Compliance External: Platform Vendors Creative Agencies Outsourcing Partners Training Providers Key Responsibilities Lifecycle Automation Strategy & Implementation Support implementation of lifecycle marketing strategies using SQL and JavaScript-powered automations. Help maintain and improve automation workflows, progressively taking on greater responsibility. Translate strategic objectives into actionable marketing plans. Marketing Technology Support Develop basic JavaScript for use in automation platforms. Troubleshoot issues in the marketing tech stack and work with IT/Dev teams on implementations. Data Analysis & Performance Optimisation Use SQL to analyse marketing and customer interaction data. Assist in maintaining data models and ETL processes. Support reporting and dashboard creation to track key metrics. Testing & Continuous Improvement Assist in A/B testing setup and analysis across various channels. Contribute to testing frameworks and continuous optimisation of campaigns. Communication & Stakeholder Management Support the rollout of new communication channels and initiatives. Maintain strong relationships with vendors and cross-functional teams. Act as a liaison between marketing and other departments to ensure alignment on capabilities and projects.
Channel Management Assist with maintaining integrations across channels such as: Email: HTML/CSS development, basic JavaScript SMS Live Chat & Messengers Bots SDK Implementations: Push notifications, content cards Emerging Channels Code & Documentation Management Use version control systems (e.g., Git) to manage marketing automation code. Assist in maintaining technical documentation and knowledge base articles. Regulatory Compliance & Best Practices Ensure all marketing activities comply with relevant laws (e.g., GDPR, Spam Act). Apply secure coding practices and assist in audits to identify system vulnerabilities. Experience and Capabilities Professional Experience 3+ years in marketing operations, CRM, or automation execution roles. Experience in lifecycle marketing and multi-channel campaign execution. Understanding of email and SMS marketing best practices. Familiarity with A/B testing concepts. Exposure to project management methodologies. Technical Skills Experience with tools like Braze, Marketo, Salesforce Marketing Cloud, Adobe, or Klaviyo is valuable. Basic proficiency in HTML, CSS, and JavaScript (especially for email/web environments). Familiarity with SQL; willingness to grow expertise. Understanding of JSON, APIs, and webhooks. Willingness to learn version control tools like Git. Analytical & Problem-Solving Skills Foundational analytical skills with a data-driven mindset. Interest in segmentation, debugging, and workflow optimisation. Ability to communicate technical concepts clearly and effectively. Personal Attributes Quick learner and adaptable to evolving technologies. Self-motivated and proactive. Passionate about staying current with MarTech trends. How to apply for this opportunity: Easy 3-Step Process: 1. Click on Apply and register or log in on our portal. 2. Upload your updated resume and complete the screening form. 3. Increase your chances of getting shortlisted and meet the client for the interview! About Our Client: They're passionate about helping Aussies get more from their money. But their mission extends to providing a seamless, ongoing experience that is not only quick and fuss-free, but non-intrusive. It is one of Australia's largest comparison businesses, serving over 1,000,000 Australian families every year across Health Insurance, Life Insurance, Energy & Gas, Home Loans, Hearing Aids, and Child Care. About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Bengaluru
Work from Office
Job description (4-6 years of experience): Key Responsibilities: Act as a liaison between client teams, source system owners, and application users. Understand business requirements and translate them into technical data models and pipelines. Design and develop scalable ETL/ELT workflows using PySpark, SQL, and Python. Work extensively with Snowflake for data modeling, data warehousing, and query optimization. Integrate data from various AWS sources (e.g., S3, Glue) into Snowflake. Ensure data quality, reliability, and performance of data pipelines. Document processes and provide support for existing data systems. Required Skills: Strong hands-on experience with Snowflake (Data Modeling, Performance Tuning, Snowpipe, Streams & Tasks). Proficiency in SQL for complex querying and optimization. Good working knowledge of AWS services related to data engineering. Experience with PySpark and Python scripting for ETL development. Excellent communication skills for interacting with client stakeholders and teams.
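As a purely illustrative sketch of the S3-to-Snowflake integration this posting describes, the PySpark job below reads raw CSV files from an S3 bucket and writes them to a Snowflake staging table through the Spark-Snowflake connector. The bucket, table, warehouse, and credential values are placeholders, and the spark-snowflake and hadoop-aws packages are assumed to be on the classpath.

```python
from pyspark.sql import SparkSession, functions as F

SNOWFLAKE_SOURCE = "net.snowflake.spark.snowflake"

spark = SparkSession.builder.appName("s3_to_snowflake").getOrCreate()

# Read raw CSV files landed in S3 and stamp each row with a load timestamp.
orders = (
    spark.read.option("header", "true")
    .csv("s3a://example-bucket/raw/orders/")
    .withColumn("load_ts", F.current_timestamp())
)

sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "********",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
}

# Write to a Snowflake staging table via the Spark-Snowflake connector.
(
    orders.write.format(SNOWFLAKE_SOURCE)
    .options(**sf_options)
    .option("dbtable", "STG_ORDERS")
    .mode("overwrite")
    .save()
)
```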
Posted 2 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Bengaluru
Hybrid
The National Instruments (NI, now known as Emerson Test & Measurement) Business Insights and Analytics team is seeking highly skilled and experienced Sr. Strategic Consultants to join our dynamic & growing team. The ideal candidates will have a solid background in data modeling and business analytics, and a passion for solving complex data problems. You will be responsible for designing, building, and maintaining scalable data pipelines, analytics solutions, and visualizations to support our business objectives. This is a high-visibility, highly matrixed role involving close collaboration with the leadership team. This role will directly support the company's Software Sales Business leadership teams and key initiatives, and perform analytics around performance, customer and contact base, product portfolio, financials, and markets in support of NI's Business Units, as well as work with key functional areas including Sales, Marketing, Product Management, Finance, and R&D. In this Role, Your Responsibilities Will Be: Design, develop, and maintain scalable data pipelines and ETL processes with DBT and Power BI. Collaborate with analysts, data scientists, and other collaborators to understand data requirements and deliver high-quality data solutions. Implement data integration, transformation, and validation processes. Optimize and tune data pipelines for cost, performance, and reliability. Develop and maintain data models, schemas, and documentation. Ensure data quality, integrity, and security across all data systems. Monitor and fix data pipeline issues and implement solutions. Stay up to date with the latest industry trends and technologies in data engineering and analytics. Building effective partnerships with business collaborators and team members. Developing a deep understanding of business strategies, processes, and operations and the resulting data. Applying data and advanced analytics capabilities (including advanced modeling techniques) to provide innovative insights that solve key business problems and drive alignment between functions, collaborators, and information producers. Developing scalable data/reporting/analytics solutions. Extracting, transforming, and analyzing large sets of data. Gathering and effectively communicating insights. Driving positive business impact through insights. Who You Are: You are a leader who is self-driven, shows a tremendous amount of initiative, and gracefully handles ambiguity. You possess strong leadership skills and have a consistent track record of developing and implementing strategies to achieve organizational objectives. For This Role, You Will Need: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5-7 years of experience in data engineering, analytics, or a related role. Strong experience with analytical SQL techniques. Experience building data products following data modeling concepts such as Kimball, Data Mesh, or Data Vault. Strong problem-solving skills and attention to detail. Good communication and collaboration skills.
Preferred Qualifications that Set You Apart: Programming languages: Python, R Traditional database platforms: MySQL, SQL Server, Oracle Cloud platforms: AWS, Azure, Google Cloud Data warehousing and pipelines: Hive, Spark, Snowflake, DBT, Airflow Data ingestion and integration: Informatica, Fivetran, Mulesoft Data visualization and dashboarding: Power BI, Tableau CRM and ERP data: Salesforce, Oracle E-Business Suite Our Culture & Commitment to You At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives, because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams, working together, are key to driving growth and delivering business results. We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, an Employee Assistance Program, employee resource groups, recognition, and much more. Our culture offers flexible time off plans, including paid parental leave (maternal and paternal), vacation and holiday leave.
Posted 2 weeks ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc. Maintaining data quality and accuracy across production data systems Working with Data Analysts to develop ETL processes for analysis and reporting Working with Product Managers to design and build data products Working with our team to scale and optimize our data infrastructure Participating in architecture discussions, influencing the road map, and taking ownership and responsibility over new projects Participating in on-call rotation in your respective time zone (be available by phone or email in case something goes wrong) Desired Characteristics: Minimum 8 years of software engineering experience. An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired. 2+ years of experience/fluency in Python Proficient with relational databases and Advanced SQL Expert in using services like Spark and Hive. Experience working with container-based solutions is a plus. Experience using a scheduler such as Apache Airflow, Apache Luigi, or Chronos. Experience using cloud services (AWS) at scale Proven long-term experience and enthusiasm for distributed data processing at scale, eagerness to learn new things. Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments. Exposure to the whole software development lifecycle from inception to production and monitoring. Experience in the Advertising Attribution domain is a plus Experience in agile software development processes Excellent interpersonal and communication skills
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Snowflake Database Administrator (DBA) Summary: We are seeking a highly skilled and experienced Snowflake Database Administrator (DBA) to join our team. The ideal candidate will be responsible for the administration, management, and optimization of our Snowflake data platform. The role requires strong expertise in database design, performance tuning, security, and data governance within the Snowflake environment. Key Responsibilities: Administer and manage Snowflake cloud data warehouse environments, including provisioning, configuration, monitoring, and maintenance. Implement security policies, compliance, and access controls. Manage Snowflake accounts and databases in a multi-tenant environment. Monitor the systems and provide proactive solutions to ensure high availability and reliability. Monitor and manage Snowflake costs. Collaborate with developers, support engineers, and business stakeholders to ensure efficient data integration. Automate database management tasks and procedures to improve operational efficiency. Stay up to date with the latest Snowflake features, best practices, and industry trends to enhance the overall data architecture. Develop and maintain documentation, including database configurations, processes, and standard operating procedures. Support disaster recovery and business continuity planning for Snowflake environments. Required Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in Snowflake operations and administration. Strong knowledge of SQL, query optimization, and performance tuning techniques. Experience in managing security, access controls, and data governance in Snowflake. Familiarity with AWS. Proficiency in Python or Bash. Experience in automating database tasks using Terraform, CloudFormation, or similar tools. Understanding of data modeling concepts and experience working with structured and semi-structured data (JSON, Avro, Parquet). Strong analytical, problem-solving, and troubleshooting skills. Excellent communication and collaboration abilities. Preferred Qualifications: Snowflake certification (e.g., SnowPro Core, SnowPro Advanced: Architect, Administrator). Experience with CI/CD pipelines and DevOps practices for database management. Knowledge of machine learning and analytics workflows within Snowflake. Hands-on experience with data streaming technologies (Kafka, AWS Kinesis, etc.).
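Purely as a sketch of the kind of cost-control and access-control automation such a DBA role lists, the snippet below uses the Snowflake Python connector to create a resource monitor that caps monthly warehouse credits and to grant read-only access to an analyst role. Account, warehouse, database, and role names are assumptions for illustration, not details from the posting.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="ADMIN_USER",
    password="********",
    role="ACCOUNTADMIN",
)

statements = [
    # Cap monthly credits and suspend the warehouse once the quota is hit.
    """CREATE RESOURCE MONITOR monthly_cap
         WITH CREDIT_QUOTA = 100 FREQUENCY = MONTHLY START_TIMESTAMP = IMMEDIATELY
         TRIGGERS ON 90 PERCENT DO NOTIFY
                  ON 100 PERCENT DO SUSPEND""",
    "ALTER WAREHOUSE reporting_wh SET RESOURCE_MONITOR = monthly_cap",
    # Basic role-based access control: read-only grants for analysts.
    "GRANT USAGE ON DATABASE analytics TO ROLE analyst_ro",
    "GRANT USAGE ON SCHEMA analytics.public TO ROLE analyst_ro",
    "GRANT SELECT ON ALL TABLES IN SCHEMA analytics.public TO ROLE analyst_ro",
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
conn.close()
```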
Posted 2 weeks ago
4.0 - 8.0 years
15 - 25 Lacs
Pune, Mumbai (All Areas)
Hybrid
DATA ENGINEER - Immediate Joiners Job Description: 4+ years of experience in data modeling, building production-ready data pipelines, and transforming data into actionable datasets. Hands-on experience in SQL and Python is mandatory. Experience in Spark, AWS Glue, EMR, Airflow, Python, and MPP databases like Snowflake, Redshift, or Teradata. Proven ability to collaborate with onshore teams and stakeholders. Strong understanding of metrics for data pipelines and experience building visibility solutions for partner teams. Passion for driving insights through governed and scalable data solutions.
Posted 2 weeks ago
5.0 - 7.0 years
12 - 16 Lacs
Mumbai, Bengaluru, Delhi / NCR
Work from Office
We are seeking a Senior Python Developer with strong experience in AWS, Terraform, and automation frameworks. The ideal candidate will be responsible for building and integrating tools, utilities, and test automation processes across cloud and enterprise systems, including Salesforce, Dell Boomi, and Snowflake. Key Responsibilities: Design, develop, and maintain Python-based tools and services for automation and integration. Develop and manage infrastructure using Terraform and deploy resources on AWS. Automate internal backend processes including EDI document generation and Salesforce data cleanup. Integrate test automation frameworks with AWS services like Lambda, API Gateway, CloudWatch, and more. Implement and maintain automated test cases using Cucumber, Gherkin, and Postman. Collaborate with QA and DevOps teams to improve testing coverage and CI/CD automation. Work with tools such as Jira, X-Ray, and GitHub Actions for test tracking and version control. Develop utilities for integrations between Salesforce, Boomi, AWS, and Snowflake. Must-Have Qualifications: 5 to 7 years of hands-on experience in software development or test automation. Strong programming skills in Python. Solid experience working with AWS services (Lambda, API Gateway, CloudWatch, etc.). Proficiency with Terraform for managing infrastructure as code. Experience with REST API development and integration. Experience with Dell Boomi, Salesforce and SOQL. Knowledge of SQL (preferably with platforms like Snowflake). Knowledge of EDI formats and automation. Nice-to-Have Skills: Experience in BDD tools like Cucumber, Gherkin. Test management/reporting with X-Ray, integration with Jira. Exposure to version control and CI/CD workflows (e.g., GitHub, GitHub Actions). Tools & Technologies: Languages: Python, SQL Cloud: AWS (Lambda, API Gateway, CloudWatch, etc.) IaC: Terraform Automation/Testing: Cucumber, Gherkin, Postman Data & Integration: Snowflake, Salesforce, Dell Boomi DevOps: Git, GitHub Actions Tracking: Jira, X-Ray Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 2 weeks ago
5.0 - 7.0 years
6 - 10 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
More than 5 years of experience in data modeling: designing, implementing, and maintaining data models to support data quality, performance, and scalability. Proven experience as a Data Modeler, having worked with data analysts, data architects, and business stakeholders to ensure data models are aligned to business requirements. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models) and in CPG/Manufacturing/Sales/Finance/Supplier/Customer domains. Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake. Experience with version control systems like GitHub and deployment & CI tools. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, and retail data like IRI and Nielsen Retail. Must Have: Data Modeling, Data Modeling Tool experience, SQL Nice to Have: SAP HANA, Data Warehouse, Databricks, CPG
Posted 2 weeks ago
8.0 - 12.0 years
35 - 40 Lacs
Mumbai
Work from Office
JOB OVERVIEW: As part of Business Focused IT, the candidate would be in charge of scaling up and managing an enterprise-wide data platform that supports the analytical needs of the complete Pharma business (extensible to other businesses as required). The platform should be flexible enough to support the business operations of the future and provide intuitive, storytelling-style analytics. This position would be a part of the Analytics Center of Excellence. Essential Skills & Experience: BS/MS degree in computer science, mathematics, or equivalent relevant degree with 8+ years in Analytics, BI and Data Warehousing. Experience leading in a highly cross-functional environment, collaborating closely with IT, Finance & Engineering. Hands-on experience in architecting and building scalable data platforms, ETL processes, and distributed systems for data processing, data migration, and quality. Strong familiarity and working knowledge with cloud platforms like AWS and Snowflake. Experience in building compelling data visualizations using business intelligence and data visualization tools like Tableau, BI and Qlik. Ability to develop & execute data strategy in collaboration with business leads. Excellent problem-solving, with the ability to translate complex data into business recommendations for business stakeholders. Excellent communication skills, with the ability to explain complex and abstract technological concepts to business stakeholders. Proficiency in SQL for extracting, aggregating, and processing large volumes of structured/unstructured data. Experience in advanced query optimization techniques. Proficiency in data acquisition and data preparation by pulling data from various sources. Self-driven, with the ability to learn new, unfamiliar tools and deliver on ambiguous projects with incomplete data. Experience reviewing and providing feedback on architecture and code reviews. KEY ROLES/RESPONSIBILITIES: Responsible for developing and maintaining the global data marketplace (data lake) Manages the sourcing and acquisition of internal (including IT and OT) & external data sets Ensure adherence of data to both enterprise business rules, and, especially, to legal and regulatory requirements Define the data quality standards for cross-functional data that is used in BI/analytics models/reports Provide input into data integration standards and the enterprise data architecture Responsible for modelling and designing the application data structure, storage and integration, and leading the database analysis, design and build effort Review the database deliverables throughout development, thereby ensuring quality and traceability to requirements and adherence to all quality management plans and standards Develop strategies for data acquisition, dissemination and archival Manage the data architecture within the big data solution such as Hadoop, Cloudera, etc.
Responsible for modelling and designing the big data structure, storage and integration, and leading the database analysis, design, visualization and build effort Review the database deliverables throughout development, thereby ensuring quality and traceability to requirements and adherence to all quality management plans and standards Work with partners and vendors (in a multi-vendor environment) for various capabilities Continuously review the analytics stack for improvements: improve performance, reduce overall TCO through cost optimization, and enhance predictive capabilities Bring in thought leadership with regard to analytics to make Piramal Pharma an analytics-driven business, and help drive business KPIs Preparation of Analytics Platform budgets for both CAPEX and OPEX for assigned initiatives, and rolling out the initiatives within budget & projected timelines Drive MDM Strategy and Implementation initiative Responsible for overall delivery and customer satisfaction for Business services, interaction with business leads, project status management and reporting, implementation management, identifying further opportunities for automation within PPL Ensure IT compliance in all project rollouts as per regulatory guidelines Conduct Change Management and Impact Analysis for approved enhancements. Uphold data integrity requirements following ALCOA+ guidelines. Monitor SLAs and KPIs as agreed upon by the business, offering root-cause analysis and risk mitigation action plans when needed. Drive awareness & learning across Piramal Pharma on the Enterprise Data Platform
Posted 2 weeks ago
6.0 - 9.0 years
27 - 42 Lacs
Bengaluru
Work from Office
About the role As an SDET (Data Test Automation Engineer), you will make an impact by ensuring the integrity of data transformations on our data platform, which works with large datasets, and by enhancing our testing capabilities with UI and API automation. You will be a valued member of the AI Analytics group and work collaboratively with development and product teams to deliver robust, scalable, and high-performing data test tools. In this role, you will: Design, develop, and maintain automated test scripts for data validation, transformation, API, and UI testing Conduct data testing using Python frameworks like Pandas, PySpark & Pytest Analyze test results, identify issues, and work on resolutions Ensure that automated tests are integrated into the CI/CD pipeline Work on data transformation processes, including validating rule-based column replacements and exception handling Collaborate with development and product teams to identify test requirements and strategies UI and API test automation using Playwright and Requests Validate large datasets to ensure data integrity and performance Work model We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role's business requirements, this is a hybrid position requiring 3 days a week in a client location or the respective Cognizant office. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs. What you must have to be considered Strong programming skills in Python Proficiency in data manipulation and testing libraries like Pandas, PySpark & Pytest Hands-on experience with AWS services like S3 and Lambda Familiarity with SQL for data transformation tasks These will help you stand out Strong experience with API and UI test automation tools and libraries Excellent problem-solving and analytical skills & strong communication and teamwork abilities Familiarity with CI/CD pipelines and tools like Jenkins, Docker & Kubernetes Good to have: experience with databases like Trino, Iceberg, Snowflake, and Postgres Good to have: experience with Polars and Fugue Experience with cloud tools like Kubernetes (EKS), AWS Glue & ECS Certification: AWS, Python We're excited to meet people who share our mission and can make an impact in a variety of ways. Don't hesitate to apply, even if you only meet the minimum requirements listed. Think about your transferable experiences and unique skills that make you stand out as someone who can bring new and exciting things to this role.
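To make the "rule-based column replacement" validation above concrete, here is a small, self-contained pytest + pandas sketch; the status rule, column names, and sample data are invented for illustration and are not taken from the posting.

```python
import pandas as pd
import pytest


def apply_status_rule(df: pd.DataFrame) -> pd.DataFrame:
    """Replace any status code outside the allowed set with 'UNKNOWN' (the rule under test)."""
    out = df.copy()
    out.loc[~out["status"].isin(["ACTIVE", "CLOSED"]), "status"] = "UNKNOWN"
    return out


@pytest.fixture
def source_frame() -> pd.DataFrame:
    return pd.DataFrame({"id": [1, 2, 3], "status": ["ACTIVE", "XX", None]})


def test_status_rule_replaces_invalid_codes(source_frame):
    result = apply_status_rule(source_frame)
    # Every value must now belong to the allowed set plus the fallback.
    assert set(result["status"]) <= {"ACTIVE", "CLOSED", "UNKNOWN"}
    # The transformation must not drop or duplicate rows.
    assert len(result) == len(source_frame)
```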
Posted 2 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Job Summary: We are seeking an experienced Informatica Developer with a strong background in data integration, cloud data platforms, and modern ETL tools. The ideal candidate will have hands-on expertise in Informatica Intelligent Cloud Services (IICS/CDI/IDMC), Snowflake, and cloud storage platforms such as AWS S3. You will be responsible for building scalable data pipelines, designing integration solutions, and resolving complex data issues across cloud and on-premises environments. Key Responsibilities: Design, develop, and maintain robust data integration pipelines using Informatica PowerCenter and Informatica CDI/IDMC. Create and optimize mappings and workflows to load data into Snowflake, ensuring performance and accuracy. Develop and manage shell scripts to automate data processing and integration workflows. Implement data exchange processes between Snowflake and external systems, including AWS S3. Write complex SQL and SnowSQL queries for data validation, transformation, and reporting. Collaborate with business and technical teams to gather requirements and deliver integration solutions. Troubleshoot and resolve performance, data quality, and integration issues in a timely manner. Work on integrations with third-party applications like Salesforce and NetSuite (preferred). Required Skills and Qualifications: 5+ years of hands-on experience in Informatica PowerCenter and Informatica CDI/IDMC. Minimum 3-4 years of experience with Snowflake database and SnowSQL commands. Strong SQL development skills. Solid experience with AWS S3 and understanding of cloud data integration architecture. Proficiency in Unix/Linux Shell Scripting. Ability to independently design and implement end-to-end ETL workflows. Strong problem-solving skills and attention to detail. Experience working in Agile/Scrum environments. Preferred Qualifications (Nice to Have): Experience integrating with Salesforce and/or NetSuite using Informatica. Knowledge of cloud platforms like AWS, Azure, or GCP. Informatica certification(s) or Snowflake certifications.
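As an illustrative sketch only of the Snowflake-to-S3 data exchange and validation steps this posting mentions, the snippet below runs a COPY INTO from an assumed external stage and then checks the resulting row count from Python; the stage, table, and connection names are placeholders.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="ETL_USER", password="********",
    warehouse="LOAD_WH", database="SALES", schema="RAW",
)

with conn.cursor() as cur:
    # Assumes an external stage named s3_landing already points at the S3 bucket.
    cur.execute("""
        COPY INTO raw_invoices
        FROM @s3_landing/invoices/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    # Simple post-load validation: confirm rows actually arrived.
    cur.execute("SELECT COUNT(*) FROM raw_invoices")
    print("raw_invoices row count after load:", cur.fetchone()[0])

conn.close()
```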
Posted 2 weeks ago
6.0 - 9.0 years
15 - 17 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled and innovative AI Data Engineer to join our Development team. In this role, you will design, develop, and deploy AI systems that can generate content, reason autonomously, and act as intelligent agents in dynamic environments. Key Responsibilities • Design and implement generative AI models (e.g., LLMs, diffusion models) for text, image, audio, or multimodal content generation. • Develop agentic AI systems capable of autonomous decision-making, planning, and tool use in complex environments. • Integrate AI agents with APIs, databases, and external tools to enable real-world task execution. • Fine-tune foundation models for domain-specific applications using techniques like RLHF, prompt engineering, and retrieval-augmented generation (RAG). • Collaborate with cross-functional teams including product, design, and engineering to bring AI-powered features to production. • Conduct research and stay up to date with the latest advancements in generative and agentic AI. • Ensure ethical, safe, and responsible AI development practices. Required Qualifications • Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field. • 3+ years of experience in machine learning, with a focus on generative models or autonomous agents. • Proficiency in Python and ML frameworks such as PyTorch • Experience with LLMs (e.g., GPT, Claude, LLaMA, Cortex), transformers, and diffusion models. • Familiarity with agent frameworks (e.g., LangChain, AutoGPT, ReAct, OpenAgents). • Experience with AWS and Snowflake services • Prior healthcare experience
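Since the posting leans on retrieval-augmented generation (RAG), here is a deliberately simplified, framework-free Python sketch of the pattern: retrieve the most relevant documents for a question, then build a grounded prompt for an LLM. The embed() function, the document list, and the similarity scoring are toy placeholders; a real system would use an embedding model, a vector store, and typically a framework such as LangChain.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Toy embedding: hash characters into a fixed-size unit vector."""
    vec = np.zeros(64)
    for ch in text.lower():
        vec[ord(ch) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)


documents = [
    "Snowflake Time Travel lets you query historical versions of a table.",
    "Snowflake Cortex exposes LLM functions inside the data platform.",
    "Airflow schedules and monitors batch data pipelines.",
]
doc_vectors = np.stack([embed(d) for d in documents])


def retrieve(question: str, k: int = 2) -> list:
    # Cosine similarity reduces to a dot product because vectors are unit-length.
    scores = doc_vectors @ embed(question)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]


question = "How can I query yesterday's version of a table?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt would then be sent to the LLM of choice via its API.
print(prompt)
```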
Posted 2 weeks ago
6.0 - 9.0 years
15 - 17 Lacs
Gurugram
Work from Office
Job description Role - AI/ ML Engineer Location - Gurgaon Exp - 6 to 9 yrs Shift Timings - 12 PM to 9 PM Job Description We are seeking a highly skilled and innovative AI Data Engineer to join our Development team. In this role, you will design, develop, and deploy AI systems that can generate content, reason autonomously, and act as intelligent agents in dynamic environments. Key Responsibilities • Design and implement generative AI models (e.g., LLMs, diffusion models) for text, image, audio, or multimodal content generation. • Develop agentic AI systems capable of autonomous decision-making, planning, and tool use in complex environments. • Integrate AI agents with APIs, databases, and external tools to enable real-world task execution. • Fine-tune foundation models for domain-specific applications using techniques like RLHF, prompt engineering, and retrieval-augmented generation (RAG). • Collaborate with cross-functional teams, including product, design, and engineering, to bring AI-powered features to production. • Conduct research and stay up to date with the latest advancements in generative and agentic AI. • Ensure ethical, safe, and responsible AI development practices. Required Qualifications • Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field. • 3+ years of experience in machine learning, with a focus on generative models or autonomous agents. • Proficiency in Python and ML frameworks such as PyTorch • Experience with LLMs (e.g., GPT, Claude, LLaMA, Cortex), transformers, and diffusion models. • Familiarity with agent frameworks (e.g., LangChain, AutoGPT, ReAct, OpenAgents). • Experience with AWS and Snowflake services • Prior Healthcare experience • Strong understanding of reinforcement learning, planning algorithms, and multi-agent systems. • Excellent problem-solving and communication skills.
Posted 2 weeks ago
3.0 - 6.0 years
12 - 15 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled and innovative AI Data Engineer to join our Development team. In this role, you will design, develop, and deploy AI systems that can generate content, reason autonomously, and act as intelligent agents in dynamic environments. Key Responsibilities • Design and implement generative AI models (e.g., LLMs, diffusion models) for text, image, audio, or multimodal content generation. • Develop agentic AI systems capable of autonomous decision-making, planning, and tool use in complex environments. • Integrate AI agents with APIs, databases, and external tools to enable real-world task execution. • Fine-tune foundation models for domain-specific applications using techniques like RLHF, prompt engineering, and retrieval-augmented generation (RAG). • Collaborate with cross-functional teams including product, design, and engineering to bring AI-powered features to production. • Conduct research and stay up to date with the latest advancements in generative and agentic AI. • Ensure ethical, safe, and responsible AI development practices. Required Qualifications • Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field. • 3+ years of experience in machine learning, with a focus on generative models or autonomous agents. • Proficiency in Python and ML frameworks such as PyTorch • Experience with LLMs (e.g., GPT, Claude, LLaMA, Cortex), transformers, and diffusion models. • Familiarity with agent frameworks (e.g., LangChain, AutoGPT, ReAct, OpenAgents). • Experience with AWS and Snowflake services • Prior Healthcare experience • Strong understanding of reinforcement learning, planning algorithms, and multi-agent systems. • Excellent problem-solving and communication skills. Shift Timings - 12 PM - 9 PM Office location - Bangalore/Gurgaon (Carelon office locations)
Posted 2 weeks ago
3.0 - 6.0 years
12 - 15 Lacs
Gurugram
Work from Office
Role - AI/ ML Engineer Location - Gurgaon Shift Timings - 12 PM to 9 PM Job Description We are seeking a highly skilled and innovative AI Data Engineer to join our Development team. In this role, you will design, develop, and deploy AI systems that can generate content, reason autonomously, and act as intelligent agents in dynamic environments. Key Responsibilities • Design and implement generative AI models (e.g., LLMs, diffusion models) for text, image, audio, or multimodal content generation. • Develop agentic AI systems capable of autonomous decision-making, planning, and tool use in complex environments. • Integrate AI agents with APIs, databases, and external tools to enable real-world task execution. • Fine-tune foundation models for domain-specific applications using techniques like RLHF, prompt engineering, and retrieval-augmented generation (RAG). • Collaborate with cross-functional teams, including product, design, and engineering, to bring AI-powered features to production. • Conduct research and stay up to date with the latest advancements in generative and agentic AI. • Ensure ethical, safe, and responsible AI development practices. Required Qualifications • Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field. • 3+ years of experience in machine learning, with a focus on generative models or autonomous agents. • Proficiency in Python and ML frameworks such as PyTorch • Experience with LLMs (e.g., GPT, Claude, LLaMA, Cortex), transformers, and diffusion models. • Familiarity with agent frameworks (e.g., LangChain, AutoGPT, ReAct, OpenAgents). • Experience with AWS and Snowflake services • Prior Healthcare experience • Strong understanding of reinforcement learning, planning algorithms, and multi-agent systems. • Excellent problem-solving and communication skills.
Posted 2 weeks ago
5.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Manage and support the Delivery Operations Team by implementing and supporting ETL and automation procedures. Schedule and perform delivery operations functions to complete tasks and ensure client satisfaction. ESSENTIAL FUNCTIONS: Process data conversions on multiple platforms Perform address standardization, merge/purge, database updates, client mailings, and postal presort. Automate scripts that transfer and manipulate internal and external data feeds. Ability to multitask and manage multiple jobs to ensure timely client deliverables Work with technical staff to maintain and support an ETL environment. Work in a team environment with database/CRM specialists, modelers, analysts, and application programmers to deliver results for clients. REQUIRED SKILLS: Experience in database marketing with the ability to transform and manipulate data. Experience with Oracle and SQL to automate scripts that process and manipulate marketing data. Experience with tools such as DMExpress, Talend, Snowflake, the SAP DQM suite of tools, and Excel. Experience with SQL Server: data exports and imports, ability to run SQL Server Agent jobs and SSIS packages. Experience with editors like Notepad++, UltraEdit, or similar. Experience in SFTP and PGP to ensure data security and protection of client data. Experience working with large-scale customer databases in a relational database environment. Proven ability to work on multiple tasks at a given time. Ability to communicate and work in a team environment to ensure tasks are completed in a timely manner MINIMUM QUALIFICATIONS: Bachelor's degree or equivalent 5+ years' experience in database marketing. Excellent oral and written communication skills required.
Posted 2 weeks ago
9.0 - 14.0 years
35 - 65 Lacs
Bengaluru
Hybrid
Software ARCHITECT - PRODUCT MNC - Urgent Hiring!!! Are you passionate about crafting scalable, cloud-native architectures? Do you thrive at the intersection of AI, cloud, and data platforms? Join us at Integra Connect, where we are transforming healthcare technology through intelligent solutions and innovation. LOCATION: BANGALORE - HYBRID What You'll Do: Architect end-to-end systems using Azure, Snowflake, Python, and .NET Lead AI/ML architecture and drive responsible AI practices Design scalable data platforms & ETL pipelines Mentor cross-functional engineering teams Set best practices for performance, security, and cloud optimization What We're Looking For: 8+ years in software development, 3+ years in architecture Deep expertise in Azure DevOps, AKS, Snowflake, AI/ML, and .NET/C# Experience in cloud-native architectures and healthcare systems is a plus Strong leadership, strategic thinking, and problem-solving skills At Integra Connect, you'll be part of a team enabling specialty healthcare practices to thrive in a value-based care world, leveraging modern tech to make a real impact. Hybrid | Competitive Benefits | Growth-Focused Culture Ready to architect the future of healthcare? Apply now or DM me for more info!
Posted 2 weeks ago
5.0 - 7.0 years
12 - 18 Lacs
Pune
Work from Office
Proven experience as a Data Engineer. Must have strong knowledge of T-SQL. Should have an Azure ADF background. Strong expertise in Snowflake and Data Factory. Proficiency in Azure SQL and experience with data modeling. Experience with ETL tools. Benefits: health insurance, flexible working.
Posted 2 weeks ago
4.0 - 6.0 years
18 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Work Mode: Hybrid Interview Mode: Virtual (2 Rounds) Key Skills & Responsibilities Strong hands-on experience with Snowflake database design, coding, and documentation. Expertise in performance tuning for both Oracle and Snowflake environments. Prior experience as an Apps DBA with the ability to coordinate with application teams. Proficient in using OEM, Tuning Advisor, and analyzing AWR reports. Ability to understand and optimize SQL code, and provide guidance to application teams. Efficient in managing compute and storage resources within Snowflake architecture. Perform Snowflake administrative tasks, manage multiple accounts, and implement platform best practices. Implement data governance practices including column-level security, dynamic data masking, and Role-Based Access Control (RBAC). Utilize features like Time Travel, Cloning, and replication for agile development and data recovery. Execute DML and DDL operations, and manage concurrency models effectively. Enable secure data sharing within and outside the organization.
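Illustrative sketch only: the snippet below shows what driving two of the governance features named above (a dynamic data masking policy and a Time Travel query) from Python could look like. The connection parameters, policy, table, and role names are assumed placeholders, not details from the posting.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account", user="ADMIN_USER", password="********",
    warehouse="ADMIN_WH", database="MEMBERS", schema="PUBLIC",
)

with conn.cursor() as cur:
    # Mask email addresses for every role except a privileged one.
    cur.execute("""
        CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING)
        RETURNS STRING ->
          CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val
               ELSE '***MASKED***' END
    """)
    cur.execute(
        "ALTER TABLE contacts MODIFY COLUMN email SET MASKING POLICY email_mask"
    )
    # Time Travel: inspect the table as it looked one hour ago (offset in seconds).
    cur.execute("SELECT COUNT(*) FROM contacts AT(OFFSET => -3600)")
    print("row count one hour ago:", cur.fetchone()[0])

conn.close()
```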
Posted 2 weeks ago
4.0 - 9.0 years
12 - 16 Lacs
Chennai
Work from Office
Core requirements - Solid SQL language skills Basic knowledge of data modeling Working knowledge of Snowflake in Azure and CI/CD processes (with any tooling) Nice to have - Azure ADF ETL/ELT frameworks ER/Studio Really nice to have - Healthcare / life sciences experience GxP processes Sr DW Engineer (in addition to the above) Overseeing engineers while also performing the same work himself/herself Conducting design reviews, code reviews, and deployment reviews with engineers Solid data modeling, preferably using ER/Studio (or an equivalent tool is fine) Solid Snowflake SQL optimization (recognizing and fixing poor-performing statements) Familiarity with medallion architecture (raw, refined, published, or similar terminology)
Posted 2 weeks ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations. Essential Responsibilities: As a Senior Software Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc. Maintaining data quality and accuracy across production data systems Working with Data Analysts to develop ETL processes for analysis and reporting Working with Product Managers to design and build data products Working with our DevOps team to scale and optimize our data infrastructure Participating in architecture discussions, influencing the road map, and taking ownership and responsibility over new projects Participating in on-call rotation in your respective time zone (be available by phone or email in case something goes wrong) Desired Characteristics: Minimum 8 years of software engineering experience. An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired. 2+ years of experience/fluency in Python Proficient with relational databases and Advanced SQL Expert in using services like Spark and Hive. Experience working with container-based solutions is a plus. Experience using a scheduler such as Apache Airflow, Apache Luigi, or Chronos. Experience using cloud services (AWS) at scale Proven long-term experience and enthusiasm for distributed data processing at scale, eagerness to learn new things. Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments. Exposure to the whole software development lifecycle from inception to production and monitoring. Experience in the Advertising Attribution domain is a plus Experience in agile software development processes Excellent interpersonal and communication skills
Posted 2 weeks ago
7.0 - 12.0 years
9 - 15 Lacs
Bengaluru
Work from Office
We are looking for lead or principal software engineers to join our Data Cloud team. Our Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will be designing and growing our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations. Job Description: Essential Responsibilities: As a Lead or Principal Data Engineer, your responsibilities will include: Building, refining, tuning, and maintaining our real-time and batch data infrastructure Daily use of technologies such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc. Maintaining data quality and accuracy across production data systems Working with Data Engineers to optimize data models and workflows Working with Data Analysts to develop ETL processes for analysis and reporting Working with Product Managers to design and build data products Working with our DevOps team to scale and optimize our data infrastructure Participating in architecture discussions, influencing the road map, and taking ownership and responsibility over new projects Participating in 24/7 on-call rotation (be available by phone or email in case something goes wrong) Desired Characteristics: Minimum 7 years of software engineering experience. Proven long-term experience and enthusiasm for distributed data processing at scale, eagerness to learn new things. Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments. Exposure to the whole software development lifecycle from inception to production and monitoring. Fluency in Python, or solid experience in Scala or Java Proficient with relational databases and Advanced SQL Expert in using services like Spark, HDFS, Hive, and HBase Experience using a scheduler such as Apache Airflow, Apache Luigi, or Chronos. Experience using cloud services (AWS) at scale Experience in agile software development processes Excellent interpersonal and communication skills Nice to have: Experience with large-scale / multi-tenant distributed systems Experience with columnar / NoSQL databases such as Vertica, Snowflake, HBase, Scylla, or Couchbase Experience with real-time streaming frameworks such as Flink or Storm Experience with web frameworks such as Flask or Django.
Posted 2 weeks ago
10.0 - 14.0 years
30 - 45 Lacs
Noida, Hyderabad, Gurugram
Work from Office
Description Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. As a Senior Data Engineer at Optum, you'll help us streamline the flow of information and deliver insights to manage our various Data Analytics web applications, which serve internal and external customers. This specific team is working on features such as OpenAI API integrations, working with customers to integrate disparate data sources into usable datasets, and configuring databases for our web application needs. Your work will contribute to lowering the overall cost of healthcare for our consumers and helping people live healthier lives. Primary Responsibilities: Data Pipeline Development: Develop and maintain data pipelines that extract, transform, and load (ETL) data from various sources into a centralized data storage system, such as a data warehouse or data lake. Ensure the smooth flow of data from source systems to destination systems while adhering to data quality and integrity standards Data Integration: Integrate data from multiple sources and systems, including databases, APIs, log files, streaming platforms, and external data providers. Handle data ingestion, transformation, and consolidation to create a unified and reliable data foundation for analysis and reporting Data Transformation and Processing: Develop data transformation routines to clean, normalize, and aggregate data. Apply data processing techniques to handle complex data structures, handle missing or inconsistent data, and prepare the data for analysis, reporting, or machine learning tasks Maintain and enhance existing application databases to support our many Data Analytics web applications, as well as working with our web developers on new requirements and applications Contribute to common frameworks and best practices in code development, deployment, and automation/orchestration of data pipelines Implement data governance in line with company standards Partner with Data Analytics and Product leaders to design best practices and standards for developing productional analytic pipelines Partner with Infrastructure leaders on architecture approaches to advance the data and analytics platform, including exploring new tools and techniques that leverage the cloud environment (Azure, Snowflake, others) Monitoring and Support: Monitor data pipelines and data systems to detect and resolve issues promptly. Develop monitoring tools, alerts, and automated error handling mechanisms to ensure data integrity and system reliability Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so. You will be rewarded and recognized for your performance in an environment that will challenge you and give you clear direction on what it takes to succeed in your role, as well as providing development for other roles you may be interested in. Qualifications Required Qualifications: Extensive hands-on experience in developing data pipelines that demonstrate a solid understanding of software engineering principles Proficiency in Python for fulfilling multiple general-purpose use cases, not limited to developing data APIs and pipelines Solid understanding of software engineering principles (micro-services applications and ecosystems) Fluent in SQL (Snowflake/SQL Server), with experience using window functions and more advanced features Understanding of DevOps tools, Git workflow and building CI/CD pipelines Solid understanding of Airflow Proficiency in design and implementation of pipelines and stored procedures in SQL Server and Snowflake Demonstrated ability to work with business and technical audiences in business requirement meetings, technical whiteboarding exercises, and SQL coding or debugging sessions Preferred Qualifications: Bachelor's degree or higher in Database Management, Information Technology, Computer Science, or similar Experience with Azure Data Factory or Apache Airflow Experience with Azure Databricks or Snowflake Experience working in projects with agile/scrum methodologies Experience with shell scripting languages Experience working with Apache Kafka, building appropriate producer or consumer apps Experience with production-quality ML and/or AI model development and deployment Experience working with Kubernetes and Docker, and knowledgeable about cloud infrastructure automation and management (e.g., Terraform) At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
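As a minimal illustration of the Airflow-orchestrated ETL pattern these responsibilities describe, the sketch below wires three placeholder steps into a daily DAG (Airflow 2.x style). The DAG id, task bodies, and schedule are assumptions; a real pipeline would likely use provider operators (for example a Snowflake operator) rather than bare PythonOperators.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull from source systems and stage raw files.
    print("extracting source data")


def transform(**context):
    # Placeholder: clean, normalize, and aggregate the staged data.
    print("transforming staged data")


def load(**context):
    # Placeholder: issue COPY INTO / MERGE statements against Snowflake.
    print("loading into Snowflake")


with DAG(
    dag_id="claims_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```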
Posted 2 weeks ago
Snowflake has become one of the most sought-after skills in the tech industry, with a growing demand for professionals who are proficient in handling data warehousing and analytics using this cloud-based platform. In India, the job market for Snowflake roles is flourishing, offering numerous opportunities for job seekers with the right skill set.
Major tech hubs such as Bengaluru, Pune, Hyderabad, Mumbai, and Delhi/NCR are known for their thriving tech industries and have a high demand for Snowflake professionals.
The average salary range for Snowflake professionals in India varies based on experience levels:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
A typical career path in Snowflake may include roles such as:
- Junior Snowflake Developer
- Snowflake Developer
- Senior Snowflake Developer
- Snowflake Architect
- Snowflake Consultant
- Snowflake Administrator
In addition to expertise in Snowflake, professionals in this field are often expected to have knowledge of:
- SQL
- Data warehousing concepts
- ETL tools
- Cloud platforms (AWS, Azure, GCP)
- Database management
As you explore opportunities in the Snowflake job market in India, remember to showcase your expertise in handling data analytics and warehousing using this powerful platform. Prepare thoroughly for interviews, demonstrate your skills confidently, and keep abreast of the latest developments in Snowflake to stay competitive in the tech industry. Good luck with your job search!