5.0 - 9.0 years
0 Lacs
Haryana
On-site
You will be joining the Visualization Centre of Excellence (CoE) at the Bain Capability Network (BCN), where you will work closely with global Bain case teams, Bain Partners, and end-clients to provide them with data analytics and business intelligence support using advanced tools such as SQL, Python, Azure, AWS, Tableau, Power BI, and Alteryx. The CoE serves as a central hub for all case requests related to converting data into insightful visualizations.

**Key Responsibilities:**
- Handle the entire process end to end, including requirement gathering, data cleaning, processing, and automation
- Design, build, and maintain infrastructure and systems for the extraction, transformation, and storage of large datasets for analysis
- Work as an expert on specific platforms/tools/languages (Snowflake/Azure/AWS/Python/SQL), either individually or by leading teams, to design and deliver impactful insights
- Gather requirements and business process knowledge to transform data according to end users' needs
- Investigate data to identify issues within ETL pipelines, propose solutions, and ensure scalable and maintainable data architecture
- Apply knowledge of data analysis tools such as Snowpark, Azure Databricks, AWS Athena, and Alteryx to support case teams with KPI analysis
- Prepare documentation for reference and support product development by building scalable and automated pipelines and algorithms
- Manage internal and external stakeholders, provide expertise in data management and tool proficiency, and lead client/case team calls to communicate insights effectively
- Stay updated on statistical, database, and data warehousing tools and techniques
- Provide feedback, conduct performance discussions, and assist in team management activities

**Qualifications Required:**
- Graduation/post-graduation from a top-tier college with 5-7 years of relevant work experience in Data Management, Business Intelligence, or Business Analytics
- Concentration in a quantitative discipline such as Statistics, Mathematics, Engineering, Computer Science, Econometrics, Business Analytics, or Market Research preferred
- Minimum 5 years of database development experience on the Snowflake cloud platform
- Hands-on experience with ETL processing via Snowpark and Snowflake
- Proficiency in Python, advanced SQL queries, Azure, AWS, and data modeling principles
- Motivated, collaborative team player with excellent communication skills and the ability to prioritize projects and drive them to completion under tight deadlines
- Ability to generate realistic answers, recommend solutions, and manage multiple competing priorities effectively

**Good to Have:**
- Experience in building custom GPTs and AI agents
- Knowledge of environment creation and management
- Familiarity with CI/CD pipelines: GitHub, Docker, and containerization

Please note that the company, Bain & Company, is consistently recognized as one of the world's best places to work, fostering diversity, inclusion, and collaboration to build extraordinary teams where individuals can thrive both professionally and personally.
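For illustration only, here is a minimal Snowpark sketch of the kind of ETL/KPI transformation such a role might involve; the connection parameters, table, and column names are hypothetical placeholders, not taken from the posting.

```python
# Hedged sketch only: a simple Snowpark ETL flow (filter, aggregate, persist).
# Connection parameters, table, and column names are hypothetical placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "ANALYTICS_WH",
    "database": "SALES_DB",
    "schema": "RAW",
}).create()

# Extract: read a raw orders table registered in Snowflake.
orders = session.table("RAW_ORDERS")

# Transform: keep completed orders and roll revenue up by region (a simple KPI).
kpis = (
    orders
    .filter(col("ORDER_STATUS") == "COMPLETED")
    .group_by("REGION")
    .agg(sum_("ORDER_AMOUNT").alias("TOTAL_REVENUE"))
)

# Load: persist the result for downstream BI tools such as Tableau or Power BI.
kpis.write.mode("overwrite").save_as_table("ANALYTICS.REGION_KPIS")
```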
Posted 6 days ago
10.0 - 14.0 years
0 Lacs
Delhi
On-site
As a Senior Snowflake Data Engineer at Bright Vision Technologies, your role will involve designing, building, and maintaining large-scale data warehouses using Snowflake. You will be expected to have expertise in DBT (Data Build Tool) and Python.

Key Responsibilities:
- Designing, building, and maintaining large-scale data warehouses using Snowflake
- Utilizing expertise in DBT and Python for data processing, transformation, and loading tasks
- Implementing ETL transformations using Snowpark
- Collaborating with cross-functional teams for effective communication and problem-solving
- Working with Agile development methodologies and version control systems like Git
- Utilizing data visualization tools such as Tableau, Power BI, or D3.js for data analysis and reporting

Qualifications Required:
- Bachelor's degree in Computer Science, Data Science, or a related field
- 10 years of experience in data engineering, data warehousing, or related fields
- Strong understanding of Snowflake architecture, data modeling, and data warehousing concepts
- Proficiency in SQL, including Snowflake's SQL dialect
- Experience with DBT for data transformation, testing, and deployment
- Strong programming skills in Python for data processing tasks
- Familiarity with data visualization tools for effective data analysis
- Excellent analytical and problem-solving skills for troubleshooting complex data issues
- Experience with Agile development methodologies and version control systems like Git

At Bright Vision Technologies, we offer proven success in H1B filings, end-to-end support, top clients, transparent processes, and green card sponsorship for eligible candidates. Join us on your path to a successful career in the U.S.!

Location: Shastri Nagar N, Delhi
Posted 6 days ago
10.0 - 15.0 years
18 - 20 Lacs
Noida, Gurugram
Work from Office
Lead the design and implementation of scalable data pipelines using Snowflake and Databricks. Drive data architecture and governance. Build ETL/ELT processes, optimize models, mentor the team, and ensure security and compliance. Strong Snowflake, Databricks, SQL, and Python skills required.

Required candidate profile: Experienced Data Analytics Lead skilled in Snowflake, Databricks, SQL, and Python. Proven leader in designing scalable pipelines, data governance, ETL/ELT, and team mentoring.
Posted 1 week ago
5.0 - 10.0 years
15 - 19 Lacs
Gurugram
Work from Office
Strong skills in Java 8+, web application frameworks such as Spring Boot, and RESTful API development. Familiarity with AWS toolsets, including but not limited to SQS, Lambda, DynamoDB, RDS, S3, Kinesis, and CloudFormation. Demonstrated experience in designing, building, and documenting customer-facing RESTful APIs. Demonstrable ability to read high-level business requirements and drive clarifying questions. Demonstrable ability to engage in self-paced continuous learning to upskill, with the collaboration of engineering leaders. Demonstrable ability to manage and prioritize your own time effectively. Strong skills across the full development lifecycle, from analysis through deployment to production.
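As a rough illustration of the AWS toolset named above (SQS, Lambda, DynamoDB), here is a minimal Lambda handler sketch; it is written in Python purely for brevity, even though the role itself is Java-centric, and the table and message fields are hypothetical.

```python
# Hedged sketch only: an AWS Lambda handler consuming SQS messages and writing
# to DynamoDB, shown in Python for brevity even though the role is Java-centric.
# The table name and message fields are hypothetical.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
orders_table = dynamodb.Table("orders")  # hypothetical DynamoDB table


def handler(event, context):
    """Persist order events delivered by an SQS trigger."""
    records = event.get("Records", [])
    for record in records:
        order = json.loads(record["body"])  # SQS message payload
        orders_table.put_item(Item={
            "order_id": order["orderId"],
            "status": order.get("status", "NEW"),
        })
    return {"processed": len(records)}
```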
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Remote, India
On-site
Where Data Does More. Join the Snowflake team.

We are looking for a Solutions Consultant to be part of our Professional Services team to deploy cloud products and services for our customers. This person must be a hands-on self-starter who loves solving innovative problems in a fast-paced, agile environment. The ideal candidate will have the insight to connect a specific business problem with Snowflake's solution and communicate that connection and vision to various technical and executive audiences.

The person we're looking for shares our passion for reinventing the data platform and thrives in a dynamic environment. That means having the flexibility and willingness to jump in and get things done to make Snowflake and our customers successful. It means keeping up to date on the ever-evolving data and analytics technologies, and working collaboratively with a broad range of people inside and outside the company to be an authoritative resource for Snowflake and its customers.

AS A SOLUTIONS CONSULTANT AT SNOWFLAKE, YOU WILL:
- Be responsible for delivering exceptional outcomes for our teams and customers during our migration projects
- Engage with customers and APJ consulting teams to assist with the migration from legacy environments into Snowflake and/or Snowpark
- Work with our internal team, in addition to customer engagements, to provide requirements for our Snowconvert utility based on project experiences, ensuring that our tooling is continuously improved based on our implementation experience

OUR IDEAL SOLUTIONS CONSULTANT WILL HAVE:
- University degree in computer science, engineering, mathematics, or related fields, or equivalent experience
- Minimum 5 years of experience as a solutions architect, data architect, database administrator, or data engineer
- Willingness to forge ahead to deliver outcomes for customers in a new arena, with a new product set
- Passion for solving complex customer problems
- Ability to learn new technology and build repeatable solutions/processes
- Ability to anticipate project roadblocks and have mitigation plans in hand
- Experience in Data Warehousing, Business Intelligence, AI/ML, migration, or Cloud projects
- Experience in building real-time and batch data pipelines using Spark and Scala
- Proven track record of results with multi-party, multi-year digital transformation engagements
- Proven ability to communicate and translate effectively across multiple groups, from design and engineering to client executives and technical leaders
- Strong organizational skills and the ability to work independently and manage multiple projects simultaneously
- Outstanding skills presenting to both technical and executive audiences, whether impromptu on a whiteboard or using presentations
- Hands-on experience in a technical role (SQL, data warehousing, cloud data, analytics, or ML/AI)
- Extensive knowledge of and experience with large-scale database technology (e.g. Snowflake, Netezza, Exadata, Teradata, Greenplum, etc.)
- Software development experience with Python, Java, Spark, and other scripting languages
- Proficiency in implementing data security measures, access controls, and design within the Snowflake platform
- Internal and/or external consulting experience
Skillset and Delivery Activities:
- Ability to outline the architecture of Spark and Scala environments
- Guide customers on architecting and building data engineering pipelines on Snowflake
- Run workshops and design sessions with stakeholders and customers
- Create repeatable processes and documentation as a result of customer engagements
- Scripting using Python and shell scripts for ETL workflows
- Develop best practices, including ensuring knowledge transfer so that customers are properly enabled and are able to extend the capabilities of Snowflake on their own
- Weigh in on and develop frameworks for Distributed Computing, Apache Spark, PySpark, Python, HBase, Kafka, REST-based APIs, and Machine Learning as part of our tools development (Snowconvert) and overall modernization processes

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information.
Posted 2 weeks ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Ready to build the future with AI?

At Genpact, we don't just keep up with technology; we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Principal Consultant, Python and SQL

Responsibilities
- Snowflake ETL Developer with extensive experience in SQL and Python
- Experience in SQL, including tables, views, joins, and CTEs
- Exposure to data movement using Python libraries
- Exposure to logging, lambda functions, OOP, and connecting with different database sources to fetch data
- Exposure to Python libraries such as pandas
- Experienced in SQL performance tuning and troubleshooting
- Knowledge of CI/CD using Jenkins/Bamboo
- Experience in Agile/Scrum-based project execution with exposure to JIRA or similar tools
- Good to have experience in orchestration using Airflow
- Good to have Python experience with hands-on exposure to pandas, Snowpark, data crunching, and connecting to other RDBMSs
- Very good SQL expertise in joins and analytical functions
- Cloud knowledge of AWS services like S3, Lambda, and Athena, or Blob
- Team player with a collaborative approach and excellent communication skills
- Good communication and articulation

Qualifications we seek in you
Minimum qualifications
- Graduate/B.Tech/MCA
- Knowledge/experience of Banking or Capital Markets is an added advantage for this position

Why join Genpact
- Lead AI-first transformation: build and scale AI solutions that redefine industries
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
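For illustration only, here is a minimal sketch of the pandas-based data movement this role describes; it assumes the snowflake-connector-python package with its pandas extras and SQLAlchemy are installed, and the source connection string, tables, and columns are hypothetical.

```python
# Hedged sketch only: move data from a source RDBMS into Snowflake using pandas.
# Assumes snowflake-connector-python[pandas] and SQLAlchemy are installed;
# connection strings and table names are hypothetical placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas
from sqlalchemy import create_engine

# Extract: pull a slice of data from the source database.
source_engine = create_engine("postgresql://user:password@source-host/sales")
orders = pd.read_sql("SELECT order_id, region, order_amount FROM orders", source_engine)

# Transform: simple cleanup in pandas.
orders = orders.dropna(subset=["order_id"])
orders["region"] = orders["region"].str.upper()

# Load: bulk-write the DataFrame into a Snowflake staging table.
conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="ETL_WH", database="SALES_DB", schema="STAGING",
)
write_pandas(conn, orders, table_name="ORDERS_STAGE", auto_create_table=True)
conn.close()
```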
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Nagpur, Maharashtra
On-site
As a Data Engineer at our organization, you will play a crucial role in expanding and optimizing our data and data pipeline architecture. Your responsibilities will include optimizing data flow and collection for cross-functional teams, supporting software developers, database architects, data analysts, and data scientists on data initiatives, and ensuring optimal data delivery architecture throughout ongoing projects. You will be expected to be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

The ideal candidate will be experienced in data pipeline building and data wrangling, with a passion for optimizing data systems and building them from the ground up. You will have the opportunity to work on redesigning our company's data architecture to support our next generation of products and data initiatives.

As a Snowflake Data Engineer, you must have proficiency in SQL, Python, Snowflake, data modeling, ETL, and Snowpark. Additionally, experience with DBT (Data Build Tool), API integration (AWS Lambda), Git, and AWS S3 integration is considered beneficial. You will be responsible for crafting and optimizing complex SQL queries and stored procedures, working with the Snowflake cloud data warehousing service, understanding ETL processes, leveraging DBT for data transformation and modeling, integrating Snowflake with AWS S3 storage, and utilizing Snowpark for data processing. Your expertise in troubleshooting data-related issues, creating technical documentation, designing data schemas, and utilizing Python programming for data processing will be essential for success in this role.

We are seeking dynamic individuals who are energetic and passionate about their work to join our innovative organization. We offer high-impact careers and growth opportunities across global locations, with a collaborative work environment that fosters continuous learning and development. Join us at NICE to thrive, learn, and grow while enjoying generous benefits and perks.
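For illustration only, here is a minimal sketch of the S3-to-Snowflake integration mentioned above, using the Snowflake Python connector to run a COPY INTO from an external stage; the stage, table, and file-format settings are hypothetical.

```python
# Hedged sketch only: loading staged S3 files into Snowflake with COPY INTO,
# run through the Snowflake Python connector. Stage, table, and file-format
# settings are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="LOAD_WH", database="SALES_DB", schema="STAGING",
)
try:
    cur = conn.cursor()
    # COPY INTO pulls CSV files from an external S3 stage into a landing table;
    # ON_ERROR = 'SKIP_FILE' keeps one bad file from aborting the whole load.
    cur.execute("""
        COPY INTO STAGING.RAW_EVENTS
        FROM @S3_EVENTS_STAGE/2024/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'SKIP_FILE'
    """)
    for row in cur.fetchall():
        print(row)  # per-file load status returned by COPY INTO
finally:
    conn.close()
```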
Posted 3 weeks ago
4.0 - 7.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Job Summary
We are looking for a skilled Snowflake Developer with hands-on experience in Python, SQL, and Snowpark to join our data engineering team. You will be responsible for designing and building scalable data pipelines, developing Snowpark-based data applications, and enabling advanced analytics solutions on the Snowflake Data Cloud platform.

Key Responsibilities
- Develop and maintain robust, scalable, and high-performance data pipelines using Snowflake SQL, Python, and Snowpark
- Use Snowpark (Python API) to build data engineering and data science workflows within the Snowflake environment
- Perform advanced data transformation, modeling, and optimization to support business reporting and analytics
- Tune queries and warehouse usage for cost and performance optimization
- Leverage Azure data services for data ingestion, orchestration, observability, etc.
- Implement best practices for data governance, security, and data quality within Snowflake

Required Skills
- 4+ years of hands-on experience with Snowflake development and administration
- Strong command of SQL for complex queries, data modeling, and transformations
- Proficient in Python, especially in the context of data engineering and Snowpark usage
- Working experience with Snowpark for building data pipelines or analytics applications
- Understanding of data warehouse architecture, ELT/ETL processes, and cloud data platforms
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines

Preferred Qualifications
- Experience with cloud platforms like AWS, Azure, or GCP
- Knowledge of orchestration tools such as Airflow or dbt
- Familiarity with data security and role-based access control (RBAC) in Snowflake
- Snowflake certifications are a plus
- DBT tool knowledge

Soft Skills
- Strong analytical and problem-solving capabilities
- Ability to work independently and in a collaborative team environment
- Excellent communication and documentation skills
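For illustration only, here is a minimal sketch of using the Snowpark Python API to push transformation logic into Snowflake via a registered UDF; the connection settings, table, and function names are hypothetical.

```python
# Hedged sketch only: registering a Snowpark Python UDF so transformation logic
# runs inside Snowflake. Connection details, table, and column names are
# hypothetical placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, udf
from snowflake.snowpark.types import FloatType

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "DEV_WH", "database": "SALES_DB", "schema": "RAW",
}).create()


@udf(name="normalise_amount", return_type=FloatType(), input_types=[FloatType()],
     replace=True, session=session)
def normalise_amount(amount: float) -> float:
    # Map an order amount onto a 0-1 score, capped at an arbitrary ceiling.
    if amount is None:
        return None
    cap = 10_000.0
    return min(amount, cap) / cap


orders = session.table("ORDERS")
scored = orders.with_column("AMOUNT_SCORE", normalise_amount(col("ORDER_AMOUNT")))
scored.show()
```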
Posted 3 weeks ago
4.0 - 7.0 years
4 - 7 Lacs
Bengaluru, Karnataka, India
On-site
We are seeking a proactive Senior Snowflake PySpark Developer to lead the design and maintenance of data pipelines in cloud environments. You will be responsible for building robust ETL processes using Snowflake, PySpark, SQL, and AWS Glue. This role requires strong expertise in data architecture, data modeling, and a collaborative mindset to work effectively with large datasets and cross-functional teams.

Roles & Responsibilities:
- Design, build, and maintain data pipelines in cloud environments, with a focus on AWS
- Utilize Snowflake, PySpark, SQL, and AWS Glue to perform ETL tasks
- Work with large datasets, applying strong problem-solving skills to transform and analyze data
- Collaborate effectively with cross-functional teams to meet project goals
- Implement best practices in data architecture and data modeling

Skills Required:
- Strong expertise in Snowflake, Snowpark, PySpark, and SQL
- Knowledge of data architecture, data modeling, and ETL processes
- Experience in building and maintaining data pipelines in cloud environments, especially AWS
- Familiarity with AWS Glue for ETL tasks
- Excellent problem-solving and communication skills
- Experience with other AWS services (e.g., S3, Redshift, Lambda) is a plus
- Knowledge of additional programming languages like Python or Java is a plus
- Familiarity with version control systems like Git is a plus

Qualification:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience
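For illustration only, here is a minimal AWS Glue PySpark job sketch of the kind of ETL task described above; the bucket paths, column names, and job parameters are hypothetical.

```python
# Hedged sketch only: a small AWS Glue PySpark job that reads raw JSON from S3,
# derives a partition column, and writes curated Parquet back to S3.
# Bucket paths, column names, and job parameters are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql.functions import col, to_date

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: raw event files landed by an upstream ingestion process.
events = spark.read.json("s3://example-raw-bucket/events/")

# Transform: derive a date partition and drop records without an event type.
cleaned = (
    events
    .withColumn("event_date", to_date(col("event_ts")))
    .filter(col("event_type").isNotNull())
)

# Load: write curated, partitioned Parquet for downstream consumers.
(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/"))

job.commit()
```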
Posted 1 month ago
4.0 - 7.0 years
4 - 7 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a proactive Senior Snowflake PySpark Developer to lead the design and maintenance of data pipelines in cloud environments. You will be responsible for building robust ETL processes using Snowflake, PySpark, SQL, and AWS Glue. This role requires strong expertise in data architecture, data modeling, and a collaborative mindset to work effectively with large datasets and cross-functional teams.

Roles & Responsibilities:
- Design, build, and maintain data pipelines in cloud environments, with a focus on AWS
- Utilize Snowflake, PySpark, SQL, and AWS Glue to perform ETL tasks
- Work with large datasets, applying strong problem-solving skills to transform and analyze data
- Collaborate effectively with cross-functional teams to meet project goals
- Implement best practices in data architecture and data modeling

Skills Required:
- Strong expertise in Snowflake, Snowpark, PySpark, and SQL
- Knowledge of data architecture, data modeling, and ETL processes
- Experience in building and maintaining data pipelines in cloud environments, especially AWS
- Familiarity with AWS Glue for ETL tasks
- Excellent problem-solving and communication skills
- Experience with other AWS services (e.g., S3, Redshift, Lambda) is a plus
- Knowledge of additional programming languages like Python or Java is a plus
- Familiarity with version control systems like Git is a plus

Qualification:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience
Posted 1 month ago
4.0 - 7.0 years
4 - 7 Lacs
Delhi, India
On-site
We are seeking a proactive Senior Snowflake PySpark Developer to lead the design and maintenance of data pipelines in cloud environments. You will be responsible for building robust ETL processes using Snowflake, PySpark, SQL, and AWS Glue. This role requires strong expertise in data architecture, data modeling, and a collaborative mindset to work effectively with large datasets and cross-functional teams.

Roles & Responsibilities:
- Design, build, and maintain data pipelines in cloud environments, with a focus on AWS
- Utilize Snowflake, PySpark, SQL, and AWS Glue to perform ETL tasks
- Work with large datasets, applying strong problem-solving skills to transform and analyze data
- Collaborate effectively with cross-functional teams to meet project goals
- Implement best practices in data architecture and data modeling

Skills Required:
- Strong expertise in Snowflake, Snowpark, PySpark, and SQL
- Knowledge of data architecture, data modeling, and ETL processes
- Experience in building and maintaining data pipelines in cloud environments, especially AWS
- Familiarity with AWS Glue for ETL tasks
- Excellent problem-solving and communication skills
- Experience with other AWS services (e.g., S3, Redshift, Lambda) is a plus
- Knowledge of additional programming languages like Python or Java is a plus
- Familiarity with version control systems like Git is a plus

Qualification:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Maharashtra
On-site
As a skilled Snowflake Developer with over 7 years of experience, you will be responsible for designing, developing, and optimizing Snowflake data solutions. Your expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration will be crucial in building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Your key responsibilities will include:
- Designing and developing Snowflake databases, schemas, tables, and views following best practices
- Writing complex SQL queries, stored procedures, and UDFs for data transformation
- Optimizing query performance using clustering, partitioning, and materialized views
- Implementing Snowflake features such as Time Travel, Zero-Copy Cloning, and Streams & Tasks
- Building and maintaining ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark
- Integrating Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe)
- Developing CDC (Change Data Capture) and real-time data processing solutions
- Designing star schema, snowflake schema, and data vault models in Snowflake
- Implementing data sharing, secure views, and dynamic data masking
- Ensuring data quality, consistency, and governance across Snowflake environments
- Monitoring and optimizing Snowflake warehouse performance (scaling, caching, resource usage)
- Troubleshooting data pipeline failures, latency issues, and query bottlenecks
- Collaborating with data analysts, BI teams, and business stakeholders to deliver data solutions
- Documenting data flows, architecture, and technical specifications
- Mentoring junior developers on Snowflake best practices

Required Skills & Qualifications:
- 7+ years in database development, data warehousing, or ETL
- 4+ years of hands-on Snowflake development experience
- Strong SQL or Python skills for data processing
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark)
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT)
- Certifications: SnowPro Core Certification (preferred)

Preferred Skills:
- Familiarity with data governance and metadata management
- Familiarity with DBT, Airflow, SSIS & IICS
- Knowledge of CI/CD pipelines (Azure DevOps)
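For illustration only, here is a minimal sketch of the Streams & Tasks CDC pattern mentioned above, issued as SQL through a Snowpark session; the warehouse, table, and task names are hypothetical.

```python
# Hedged sketch only: a simple CDC pattern with a Snowflake Stream and Task,
# issued as SQL through a Snowpark session. Warehouse, table, and task names
# are hypothetical placeholders.
from snowflake.snowpark import Session

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "ETL_WH", "database": "SALES_DB", "schema": "STAGING",
}).create()

# 1. A stream captures inserts/updates/deletes on the raw table.
session.sql(
    "CREATE OR REPLACE STREAM RAW_ORDERS_STREAM ON TABLE RAW_ORDERS"
).collect()

# 2. A task periodically merges captured changes into the curated table,
#    but only when the stream actually has new data.
session.sql("""
    CREATE OR REPLACE TASK MERGE_ORDERS_TASK
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      MERGE INTO CURATED.ORDERS t
      USING RAW_ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
      WHEN MATCHED THEN UPDATE SET t.ORDER_AMOUNT = s.ORDER_AMOUNT
      WHEN NOT MATCHED THEN INSERT (ORDER_ID, ORDER_AMOUNT)
        VALUES (s.ORDER_ID, s.ORDER_AMOUNT)
""").collect()

# Tasks are created in a suspended state; resume to start the schedule.
session.sql("ALTER TASK MERGE_ORDERS_TASK RESUME").collect()
```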
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Delhi
On-site
As a Snowflake Solution Architect, you will be responsible for owning and driving the development of Snowflake solutions and products as part of the COE. Your role will involve working with and guiding the team to build solutions using the latest innovations and features launched by Snowflake. Additionally, you will conduct sessions on the latest and upcoming launches of the Snowflake ecosystem and liaise with Snowflake Product and Engineering to stay ahead of new features, innovations, and updates. You will be expected to publish articles and architectures that solve business problems. Furthermore, you will work on accelerators to demonstrate how Snowflake solutions and tools integrate with and compare to other platforms such as AWS, Azure Fabric, and Databricks.

In this role, you will lead the post-sales technical strategy and execution for high-priority Snowflake use cases across strategic customer accounts. You will also be responsible for triaging and resolving advanced, long-running customer issues while ensuring timely and clear communication. Developing and maintaining robust internal documentation, knowledge bases, and training materials to scale support efficiency will also be part of your responsibilities. Additionally, you will support enterprise-scale RFPs focused on Snowflake.

To be successful in this role, you should have at least 8 years of industry experience, including a minimum of 3 years in a Snowflake consulting environment. You should possess experience in implementing and operating Snowflake-centric solutions and proficiency in implementing data security measures, access controls, and design specifically within the Snowflake platform. An understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools, is essential. Strong skills in databases, data warehouses, and data processing, as well as extensive hands-on expertise with SQL and SQL analytics, are required. Familiarity with data science concepts and Python is a strong advantage. Knowledge of Snowflake components such as Snowpipe, query parsing and optimization, Snowpark, Snowflake ML, authorization and access control management, metadata management, infrastructure management and auto-scaling, the Snowflake Marketplace for datasets and applications, and DevOps and orchestration tools like Airflow, dbt, and Jenkins is necessary. Snowflake certifications are good to have.

Strong communication and presentation skills are essential in this role, as you will be required to engage with both technical and executive audiences. Moreover, you should be skilled in working collaboratively across engineering, product, and customer success teams.

This position is open in all Xebia office locations, including Pune, Bangalore, Gurugram, Hyderabad, Chennai, Bhopal, and Jaipur. If you meet the above requirements and are excited about this opportunity, please share your details here: [Apply Now](https://forms.office.com/e/LNuc2P3RAf)
Posted 1 month ago
6.0 - 11.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use platforms such as Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: Strong proficiency in SQL query development. Experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
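For illustration only, here is a minimal sketch of triggering dbt transformations from a Python orchestration step, for example after a Fivetran sync completes; it assumes the dbt CLI is installed, and the project path and tag selector are hypothetical.

```python
# Hedged sketch only: triggering dbt transformations from a Python orchestration
# step, for example after a Fivetran sync completes. Assumes the dbt CLI is
# installed; the project path and tag selector are hypothetical.
import subprocess


def run_dbt(selector: str, project_dir: str = "/opt/analytics/dbt_project") -> None:
    """Run a tagged subset of dbt models and fail loudly on errors."""
    result = subprocess.run(
        ["dbt", "run", "--select", selector, "--project-dir", project_dir],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        raise RuntimeError(f"dbt run failed:\n{result.stderr}")


# Example: rebuild only the models tagged for the Salesforce source.
run_dbt("tag:salesforce")
```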
Posted 1 month ago
7.0 - 12.0 years
22 - 27 Lacs
Hyderabad, Pune, Mumbai (All Areas)
Work from Office
Job Description - Snowflake Developer
Experience: 7+ years
Location: India, Hybrid
Employment Type: Full-time

Job Summary
We are looking for a Snowflake Developer with 7+ years of experience to design, develop, and maintain our Snowflake data platform. The ideal candidate will have strong expertise in Snowflake SQL, data modeling, and ETL/ELT processes to build efficient and scalable data solutions.

Key Responsibilities
1. Snowflake Development & Implementation
- Design and develop Snowflake databases, schemas, tables, and views
- Write and optimize complex SQL queries, stored procedures, and UDFs
- Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks)
- Manage virtual warehouses, resource monitors, and cost optimization

2. Data Pipeline & Integration
- Build and maintain ETL/ELT pipelines using Snowflake and tools like Snowpark, Python, or Spark
- Integrate Snowflake with cloud storage (S3, Blob Storage) and data sources (APIs)
- Develop data ingestion processes (batch and real-time) using Snowpipe

3. Performance Tuning & Optimization
- Optimize query performance through clustering, partitioning, and indexing
- Monitor and troubleshoot data pipelines and warehouse performance
- Implement caching strategies and materialized views for faster analytics

4. Data Modeling & Governance
- Design star schema, snowflake schema, and normalized data models
- Implement data security (RBAC, dynamic data masking, row-level security)
- Ensure data quality, documentation, and metadata management

5. Collaboration & Support
- Work with analysts, BI teams, and business users to deliver data solutions
- Document technical specifications and data flows
- Provide support and troubleshooting for Snowflake-related issues

Required Skills & Qualifications
- 7+ years in database development, data warehousing, or ETL
- 3+ years of hands-on Snowflake development experience
- Strong SQL and scripting (Python, Bash) skills
- Experience with Snowflake utilities (SnowSQL, Snowsight)
- Knowledge of cloud platforms (AWS, Azure) and data integration tools
- SnowPro Core Certification (preferred but not required)
- Experience with Coalesce, DBT, Airflow, or other data orchestration tools
- Familiarity with CI/CD pipelines and DevOps practices
- Knowledge of data visualization tools (Power BI, Tableau)
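For illustration only, here is a minimal sketch of the dynamic data masking and RBAC items listed above, executed through the Snowflake Python connector; the policy, role, table, and column names are hypothetical.

```python
# Hedged sketch only: dynamic data masking plus role-based grants in Snowflake,
# executed through the Python connector. Policy, role, table, and column names
# are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="ADMIN_WH", database="SALES_DB", schema="CURATED",
)
cur = conn.cursor()

# Masking policy: only the ANALYST_FULL role sees raw e-mail addresses.
cur.execute("""
    CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() = 'ANALYST_FULL' THEN val
           ELSE '***MASKED***'
      END
""")
cur.execute(
    "ALTER TABLE CUSTOMERS MODIFY COLUMN EMAIL SET MASKING POLICY email_mask"
)

# RBAC: grant read-only access on the curated schema to a reporting role.
cur.execute("GRANT USAGE ON SCHEMA CURATED TO ROLE REPORTING_READER")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA CURATED TO ROLE REPORTING_READER")

conn.close()
```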
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Data Engineer at Ethoca, a Mastercard Company, in Pune, India, you will play a crucial role in driving data enablement and exploring big data solutions within our technology landscape. Your responsibilities will include designing, developing, and optimizing batch and real-time data pipelines using tools such as Snowflake, Snowpark, Python, and PySpark. You will also be involved in building data transformation workflows, implementing CI/CD pipelines, and administering the Snowflake platform to ensure performance tuning, access management, and platform scalability.

Collaboration with stakeholders to understand data requirements and deliver reliable data solutions will be a key part of your role. Your expertise in cloud-based database infrastructure, SQL development, and building scalable data models using tools like Power BI will be essential in supporting business analytics and dashboarding. Additionally, you will be responsible for real-time data streaming pipelines, data observability practices, and planning and executing deployments, migrations, and upgrades across data platforms while minimizing service impacts.

To be successful in this role, you should have a strong background in computer science or software engineering, along with deep hands-on experience with Snowflake, Snowpark, Python, PySpark, and CI/CD tooling. Familiarity with Schema Change, the Java JDK, the Spring and Spring Boot frameworks, Databricks, and real-time data processing is desirable. You should also possess excellent problem-solving and analytical skills, as well as effective written and verbal communication abilities for collaborating across technical and non-technical teams.

You will be part of a high-performing team that is committed to making systems resilient and easily maintainable on the cloud. If you are looking for a challenging role that allows you to leverage cutting-edge software and development skills while working with massive data volumes, this position at Ethoca may be the right fit for you.
Posted 2 months ago
2.0 - 5.0 years
7 - 17 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Design and implement scalable data models using Snowflake to support business intelligence and analytics solutions
- Implement ETL/ELT solutions that involve complex business transformations
- Handle end-to-end data warehousing solutions
- Migrate data from legacy systems to Snowflake
- Write complex SQL queries for extracting, transforming, and loading data, ensuring high performance and accuracy
- Optimize SnowSQL queries for better processing speeds
- Integrate Snowflake with third-party applications
- Use any ETL/ELT technology
- Implement data security policies, including user access control and data masking, to maintain compliance with organizational standards
- Document solutions and data flows

Skills & Qualifications:
Experience: 2+ years of experience in data engineering, with a focus on Snowflake. Proficient in SQL and Snowflake-specific SQL functions. Experience with ETL/ELT tools and cloud data integrations.

Technical Skills:
- Strong understanding of Snowflake architecture, features, and best practices
- Experience in using Snowpark, Snowpipe, and Streamlit
- Experience in using Dynamic Tables is good to have
- Familiarity with cloud platforms (AWS, Azure, or GCP) and other cloud-based data technologies
- Experience with data modeling concepts like star schema, snowflake schema, and data partitioning
- Experience with Snowflake's Time Travel, Streams, and Tasks features
- Experience in data pipeline orchestration
- Knowledge of Python or Java for scripting and automation
- Knowledge of Snowflake pipelines is good to have
- Knowledge of data governance practices, including security, compliance, and data lineage
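For illustration only, here is a minimal sketch of a Snowflake Dynamic Table of the kind mentioned above, created through a Snowpark session; the warehouse, target lag, and table names are hypothetical.

```python
# Hedged sketch only: a Snowflake Dynamic Table that keeps an aggregate
# continuously refreshed from a raw table, created through a Snowpark session.
# Warehouse, target lag, and table names are hypothetical placeholders.
from snowflake.snowpark import Session

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "ETL_WH", "database": "SALES_DB", "schema": "ANALYTICS",
}).create()

session.sql("""
    CREATE OR REPLACE DYNAMIC TABLE DAILY_SALES
      TARGET_LAG = '15 minutes'
      WAREHOUSE = ETL_WH
    AS
      SELECT ORDER_DATE, REGION, SUM(ORDER_AMOUNT) AS TOTAL_SALES
      FROM RAW.ORDERS
      GROUP BY ORDER_DATE, REGION
""").collect()
```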
Posted 2 months ago
5.0 - 10.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use platforms such as Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: Strong proficiency in SQL query development. Experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
Posted 2 months ago
5.0 - 10.0 years
22 - 27 Lacs
Chennai, Mumbai (All Areas)
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use platforms such as Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: Strong proficiency in SQL query development. Experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
Posted 2 months ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work?

At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Senior Data Engineer - Snowflake, AWS, Cortex AI & Horizon Catalog

Role Summary:
We are seeking an experienced Senior Data Engineer with deep expertise in modernizing Data & Analytics platforms on Snowflake, leveraging AWS services, Cortex AI, and Horizon Catalog for high-performance, AI-driven data management. The role involves designing scalable data architectures, integrating AI-powered automation, and optimizing data governance, lineage, and analytics frameworks.

Key Responsibilities:
- Architect and modernize enterprise Data & Analytics platforms on Snowflake, utilizing AWS, Cortex AI, and Horizon Catalog
- Design and optimize Snowflake-based Lakehouse architectures, integrating AWS services (S3, Redshift, Glue, Lambda, EMR, etc.)
- Leverage Cortex AI for AI-driven data automation, predictive analytics, and workflow orchestration
- Implement Horizon Catalog for enhanced data lineage, governance, metadata management, and security
- Develop high-performance ETL/ELT pipelines, integrating Snowflake with AWS and AI-powered automation frameworks
- Utilize Snowflake's native capabilities like Snowpark, Streams, Tasks, and Dynamic Tables for real-time data processing
- Establish data quality automation, lineage tracking, and AI-enhanced data governance strategies
- Collaborate with data scientists, ML engineers, and business stakeholders to drive AI-led data initiatives
- Continuously evaluate emerging AI and cloud-based data engineering technologies to improve efficiency and innovation

Qualifications we seek in you!
Minimum Qualifications
- Experience in Data Engineering, AI-powered automation, and cloud-based analytics
- Expertise in Snowflake (Warehousing, Snowpark, Streams, Tasks, Dynamic Tables)
- Strong experience with AWS services (S3, Redshift, Glue, Lambda, EMR)
- Deep understanding of Cortex AI for AI-driven data engineering automation
- Proficiency in Horizon Catalog for metadata management, lineage tracking, and data governance
- Advanced knowledge of SQL, Python, and Scala for large-scale data processing
- Experience in modernizing Data & Analytics platforms and migrating on-premises solutions to Snowflake
- Strong expertise in Data Quality, AI-driven Observability, and ModelOps for data workflows
- Familiarity with Vector Databases and Retrieval-Augmented Generation (RAG) architectures for AI-powered analytics
- Excellent leadership, problem-solving, and stakeholder collaboration skills

Preferred Skills:
- Experience with Knowledge Graphs (Neo4j, TigerGraph) for structured enterprise data systems
- Exposure to Kubernetes, Terraform, and CI/CD pipelines for scalable cloud deployments
- Background in streaming technologies (Kafka, Kinesis, AWS MSK, Snowflake Snowpipe)

Why Join Us?
- Lead Data & AI platform modernization initiatives using Snowflake, AWS, Cortex AI, and Horizon Catalog
- Work on cutting-edge AI-driven automation for cloud-native data architectures
- Competitive salary, career progression, and an opportunity to shape next-gen AI-powered data solutions

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
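For illustration only, here is a minimal sketch of calling Snowflake Cortex functions from SQL via a Snowpark session, one way the AI-driven automation described above can surface in practice; the table and column names are hypothetical.

```python
# Hedged sketch only: calling Snowflake Cortex LLM functions from SQL via a
# Snowpark session, one way AI-driven automation can surface in such a role.
# The table and column names are hypothetical placeholders.
from snowflake.snowpark import Session

session = Session.builder.configs({
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "AI_WH", "database": "SUPPORT_DB", "schema": "RAW",
}).create()

# Summarise free-text support tickets and score their sentiment in-place.
result = session.sql("""
    SELECT
        TICKET_ID,
        SNOWFLAKE.CORTEX.SUMMARIZE(TICKET_TEXT) AS TICKET_SUMMARY,
        SNOWFLAKE.CORTEX.SENTIMENT(TICKET_TEXT) AS SENTIMENT_SCORE
    FROM SUPPORT_TICKETS
    LIMIT 10
""")
result.show()
```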
Posted 2 months ago
6.0 - 8.0 years
6 - 8 Lacs
Navi Mumbai, Maharashtra, India
On-site
We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making
- Ensure data quality, integrity, and compliance with enterprise security and governance standards
- Tune and troubleshoot distributed data applications for performance and efficiency

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in: Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Strong knowledge of: ETL/ELT pipeline design, distributed computing principles, Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams
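For illustration only, here is a minimal PySpark batch sketch of the Hadoop-ecosystem processing described above: read from HDFS, aggregate, and write a Hive table; the paths and table names are hypothetical.

```python
# Hedged sketch only: a batch PySpark job in a Hadoop environment that reads
# raw Parquet from HDFS, aggregates it, and writes a managed Hive table.
# Paths and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count

spark = (
    SparkSession.builder
    .appName("daily_clickstream_rollup")
    .enableHiveSupport()  # needed to write managed Hive tables
    .getOrCreate()
)

clicks = spark.read.parquet("hdfs:///data/raw/clickstream/dt=2024-01-01/")

rollup = (
    clicks
    .filter(col("user_id").isNotNull())
    .groupBy("page_id")
    .agg(count("*").alias("view_count"))
)

rollup.write.mode("overwrite").saveAsTable("analytics.daily_page_views")
spark.stop()
```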
Posted 3 months ago
6.0 - 8.0 years
6 - 8 Lacs
Delhi, India
On-site
We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making
- Ensure data quality, integrity, and compliance with enterprise security and governance standards
- Tune and troubleshoot distributed data applications for performance and efficiency

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in: Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Strong knowledge of: ETL/ELT pipeline design, distributed computing principles, Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams
Posted 3 months ago
6.0 - 8.0 years
6 - 8 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
We are looking for a Senior Big Data Engineer with deep experience in building scalable, high-performance data processing pipelines using Snowflake (Snowpark) and the Hadoop ecosystem. You'll design and implement batch and streaming data workflows, transform complex datasets, and optimize infrastructure to power analytics and data science solutions.

Key Responsibilities:
- Design, develop, and maintain end-to-end scalable data pipelines for high-volume batch and real-time use cases
- Implement advanced data transformations using Spark, Snowpark, Pig, and Sqoop
- Process large-scale datasets from varied sources using tools across the Hadoop ecosystem
- Optimize data storage and retrieval in HBase, Hive, and other NoSQL stores
- Collaborate closely with data scientists, analysts, and business stakeholders to enable data-driven decision-making
- Ensure data quality, integrity, and compliance with enterprise security and governance standards
- Tune and troubleshoot distributed data applications for performance and efficiency

Must-Have Skills:
- 5+ years in Data Engineering or Big Data roles
- Expertise in: Snowflake (Snowpark), Apache Spark, MapReduce, Hadoop, Sqoop, Pig, HBase
- Strong knowledge of: ETL/ELT pipeline design, distributed computing principles, Big Data architecture and performance tuning
- Proven experience handling large-scale data ingestion, processing, and transformation

Nice-to-Have Skills:
- Workflow orchestration with Apache Airflow or Oozie
- Cloud experience: AWS, Azure, or GCP
- Proficiency in Python or Scala
- Familiarity with CI/CD pipelines, Git, and DevOps environments

Soft Skills:
- Strong problem-solving and analytical mindset
- Excellent communication and documentation abilities
- Ability to work independently and within cross-functional Agile teams
Posted 3 months ago
8.0 - 10.0 years
8 - 10 Lacs
Navi Mumbai, Maharashtra, India
On-site
We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.

Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools
- Manage large-scale data ingestion from various structured and unstructured data sources
- Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS
- Optimize storage and query performance for high-throughput, low-latency systems
- Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions
- Ensure data integrity, quality, governance, and security across all systems
- Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs

Must-Have Skills:
- Strong hands-on experience with: Snowflake and Snowpark, Apache Spark, Hadoop, MapReduce, Pig, Sqoop, HBase, Hive
- Expertise in data ingestion, transformation, and pipeline orchestration
- In-depth knowledge of distributed computing and big data architecture
- Experience in data modeling, storage optimization, and query performance tuning
Posted 3 months ago
8.0 - 10.0 years
8 - 10 Lacs
Delhi, India
On-site
We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.

Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools
- Manage large-scale data ingestion from various structured and unstructured data sources
- Work with Hadoop ecosystem components including MapReduce, HBase, Hive, and HDFS
- Optimize storage and query performance for high-throughput, low-latency systems
- Collaborate with data scientists, analysts, and product teams to define and implement end-to-end data solutions
- Ensure data integrity, quality, governance, and security across all systems
- Monitor, troubleshoot, and fine-tune the performance of distributed systems and jobs

Must-Have Skills:
- Strong hands-on experience with: Snowflake and Snowpark, Apache Spark, Hadoop, MapReduce, Pig, Sqoop, HBase, Hive
- Expertise in data ingestion, transformation, and pipeline orchestration
- In-depth knowledge of distributed computing and big data architecture
- Experience in data modeling, storage optimization, and query performance tuning
Posted 3 months ago