4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About Us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Tech Overview: Every time a guest enters a Target store or browses Target.com, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 4,000 engineers, data scientists, architects, coaches and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation and continuous learning.
Pyramid Overview: We create technology solutions for Target at an enterprise scale, unlocking value for our team members, guests, and suppliers. Team members rely on us to manage $108B+ in revenue, care for our team of 450K+ (pay and benefits), enable 3M+ candidates applying to Target, provide access to every Target facility, and resolve challenges through our service centers. Guests rely on us every day to quickly and efficiently help them with inquiries such as credit card statements, finding a product, price match, returns, and much more. We ensure guests can leverage their Circle Card to get exclusive benefits. Guests and B2B clients also rely on us to buy gift cards for that special occasion. Suppliers rely on us to assist with financial management, sourcing and procurement processes, compliance and risk assessment processes, coordinating visits for field engineers, providing support for Target+ and Roundel partners, as well as supporting revenue growth to accelerate vendor acquisition and onboarding for partners.
About you:
4-year degree or equivalent experience
3+ years of software development experience with at least one full-cycle implementation
Design and develop SAP BOBJ reports and dashboards using Web Intelligence (WebI)
Design and develop reports using SAC (SAP Analytics Cloud) and Analysis for Office
Develop and maintain SAP HANA calculation views using best practices for performance and data modeling
Apply best practices in HANA modeling, including usage of input parameters, variables, and calculated columns
Monitor and optimize HANA performance, including indexing, partitioning, and use of proper join types
Support ETL/ELT processes and ensure data accuracy and consistency in models
Analyze complex business requirements and translate them into robust data models and reporting solutions
Optimize data flows and troubleshoot performance issues in SAP HANA and BOBJ environments
Document data models, ETL processes, and reporting solutions
Support UAT and production deployment, along with post-go-live support and issue resolution
Configure and maintain SLT replication jobs between SAP systems
Monitor and optimize SLT performance to ensure efficient data replication
Troubleshoot SLT-related issues and provide support to end users
Integrate data from multiple sources including SAP ECC/S4, SLT, BW, and external systems
Collaborate with business stakeholders and BI teams to ensure alignment with organizational goals and data governance
Know More About Us Here: Life at Target- https://india.target.com/ Benefits- https://india.target.com/life-at-target/workplace/benefits Culture- https://india.target.com/life-at-target/belonging
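For orientation only, the HANA calculation-view work described in this posting usually boils down to parameterized SQL against the _SYS_BIC schema. The sketch below uses SAP's hdbcli Python client; the host, credentials, view path, columns, and input parameter are all assumptions for illustration, not details from this posting.

```python
# Minimal sketch: query a HANA calculation view with an input parameter
# via the official hdbcli client. All connection and object names are hypothetical.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",  # hypothetical HANA host
    port=30015,
    user="REPORT_USER",
    password="***",
)
cur = conn.cursor()
# Classic PLACEHOLDER syntax passes input parameters to the calculation view.
cur.execute("""
    SELECT SALES_ORG, SUM(NET_VALUE) AS NET_VALUE
    FROM "_SYS_BIC"."sales.models/CV_SALES_OVERVIEW"
    ('PLACEHOLDER' = ('$$P_FISCAL_YEAR$$', '2024'))
    GROUP BY SALES_ORG
""")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```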
Posted 1 month ago
5.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about Target in India: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
About the Role: As a Senior RBX Data Specialist at Target in India, you will manage data end to end, encompassing building and maintaining pipelines through ETL/ELT and data modeling, ensuring data accuracy and system performance, and resolving data flow issues. The role also requires analyzing data to generate insights, creating visualizations for stakeholders, automating processes for efficiency, and collaborating effectively across both business and technical teams. You will also answer ad-hoc questions from your business users by conducting quick analysis on relevant data, identify trends and correlations, and form hypotheses to explain the observations. Some of this will lead to bigger projects of increased complexity, where you will have to work as part of a bigger team, but also independently execute specific tasks. Finally, you are expected to always adhere to the project schedule and technical rigor, as well as requirements for documentation, code versioning, etc.
Key Responsibilities:
Data Pipeline and Maintenance: Monitor data pipelines and warehousing systems to ensure optimal health and performance. Ensure data integrity and accuracy throughout the data lifecycle.
Incident Management and Resolution: Drive the resolution of data incidents and document their causes and fixes, collaborating with teams to prevent recurrence.
Automation and Process Improvement: Identify and implement automation opportunities and DataOps best practices to enhance the efficiency, reliability, and scalability of data processes.
Collaboration and Communication: Work closely with data teams and stakeholders to understand data pipeline architecture and dependencies, ensuring timely and accurate data delivery while effectively communicating data issues and participating in relevant discussions.
Data Quality and Governance: Implement and enforce data quality standards, monitor metrics for improvement, and support data governance by ensuring policy compliance.
Documentation and Reporting: Create and maintain clear and concise documentation of data pipelines, processes, and troubleshooting steps. Develop and generate reports on data operations performance and key metrics.
Core responsibilities are described within this job description. Job duties may change at any time due to business needs.
About You:
B.Tech / B.E. or equivalent (completed) degree
5+ years of relevant work experience
Experience in Marketing/Customer/Loyalty/Retail analytics is preferable
Exposure to A/B testing
Familiarity with big data technologies, data languages and visualization tools
Exposure to languages such as Python and R for data analysis and modelling
Proficiency in SQL for data extraction, manipulation, and analysis, with experience in big data query frameworks such as Hive, Presto, SQL, or BigQuery
Solid foundation knowledge in mathematics, statistics, and predictive modelling techniques, including Linear Regression, Logistic Regression, time-series models, and classification techniques
Ability to simplify complex technical and analytical methodologies for easier comprehension for broad audiences
Ability to identify process and tool improvements and implement change
Excellent written and verbal English communication skills for global working
Motivation to initiate, build and maintain global partnerships
Ability to function in group and/or individual settings
Willing and able to work from our office location (Bangalore HQ) as required by business needs and brand initiatives
Useful Links- Life at Target- https://india.target.com/ Benefits- https://india.target.com/life-at-target/workplace/benefits Culture- https://india.target.com/life-at-target/belonging
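As a small, hypothetical illustration of the A/B-testing exposure this posting asks for, a quick ad-hoc check often reduces to a two-proportion z-test; the conversion counts below are invented for the example.

```python
# Two-proportion z-test for an A/B experiment (illustrative numbers only).
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 465]      # control, variant conversions (made up)
exposures = [10000, 10000]    # users exposed to each arm (made up)

stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {stat:.2f}, p = {p_value:.4f}")  # small p suggests a real lift
```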
Posted 1 month ago
6.0 - 10.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID ZR_2393_JOB | Date Opened 09/11/2024 | Industry: IT Services | Work Experience: 6-10 years | Job Title: Snowflake Engineer - Database Administration | City: Bangalore South | Province: Karnataka | Country: India | Postal Code: 560066 | Number of Positions: 1 | Locations: Pune, Bangalore, Hyderabad, Indore | Contract duration: 6 months
Responsibilities:
- Must have experience working in Snowflake administration/development in data warehouse, ETL, and BI projects.
- Must have prior experience with end-to-end implementation of the Snowflake cloud data warehouse, and end-to-end data warehouse implementations on-premise, preferably on Oracle/SQL Server.
- Expertise in Snowflake: data modelling, ELT using Snowflake SQL, implementing complex stored procedures, and standard DWH and ETL concepts.
- Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and understanding how to use these features.
- Expertise in deploying Snowflake features such as data sharing.
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and big data modelling techniques using Python.
- Experience in data migration from RDBMS to the Snowflake cloud data warehouse.
- Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling).
- Experience with data security and data access controls and design.
- Experience with AWS or Azure data storage and management technologies such as S3 and Blob.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting.
- Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface.
- Must have experience of Agile development methodologies.
Good to have:
- CI/CD in Talend using Jenkins and Nexus.
- TAC configuration with LDAP, job servers, log servers, database.
- Job conductor, scheduler and monitoring.
- Git repository; creating users & roles and providing access to them.
- Agile methodology and 24/7 admin and platform support.
- Estimation of effort based on the requirement.
- Strong written communication skills; effective and persuasive in both written and oral communication.
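Purely as a sketch of the resource-monitor and warehouse-sizing administration named above, the snippet below uses the snowflake-connector-python client; the account, role, monitor, and warehouse names are hypothetical and not part of this posting.

```python
# Hedged sketch: create a resource monitor and attach it to a resized warehouse.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",  # hypothetical account locator
    user="ADMIN_USER",
    password="***",
    role="ACCOUNTADMIN",
)
cur = conn.cursor()
# Credit quota with notify/suspend triggers to keep monthly spend bounded.
cur.execute(
    "CREATE RESOURCE MONITOR IF NOT EXISTS rm_etl "
    "WITH CREDIT_QUOTA = 100 "
    "TRIGGERS ON 90 PERCENT DO NOTIFY ON 100 PERCENT DO SUSPEND"
)
# Right-size the ETL warehouse and put it under the monitor.
cur.execute("ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'MEDIUM'")
cur.execute("ALTER WAREHOUSE etl_wh SET RESOURCE_MONITOR = rm_etl")
cur.close()
conn.close()
```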
Posted 1 month ago
6.0 - 10.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID ZR_2384_JOB | Date Opened 23/10/2024 | Industry: IT Services | Work Experience: 6-10 years | Job Title: Snowflake DBA | City: Bangalore South | Province: Karnataka | Country: India | Postal Code: 560066 | Number of Positions: 1 | Contract duration: 6 months | Locations: Pune/Bangalore/Hyderabad/Indore
Responsibilities:
- Must have experience working in Snowflake administration/development in data warehouse, ETL, and BI projects.
- Must have prior experience with end-to-end implementation of the Snowflake cloud data warehouse, and end-to-end data warehouse implementations on-premise, preferably on Oracle/SQL Server.
- Expertise in Snowflake: data modelling, ELT using Snowflake SQL, implementing complex stored procedures, and standard DWH and ETL concepts.
- Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone, and time travel, and understanding how to use these features.
- Expertise in deploying Snowflake features such as data sharing.
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and big data modelling techniques using Python.
- Experience in data migration from RDBMS to the Snowflake cloud data warehouse.
- Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling).
- Experience with data security and data access controls and design.
- Experience with AWS or Azure data storage and management technologies such as S3 and Blob.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting.
- Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface.
- Must have experience of Agile development methodologies.
Good to have:
- CI/CD in Talend using Jenkins and Nexus.
- TAC configuration with LDAP, job servers, log servers, database.
- Job conductor, scheduler and monitoring.
- Git repository; creating users & roles and providing access to them.
- Agile methodology and 24/7 admin and platform support.
- Estimation of effort based on the requirement.
- Strong written communication skills; effective and persuasive in both written and oral communication.
Posted 1 month ago
6.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID ZR_2470_JOB | Date Opened 03/05/2025 | Industry: IT Services | Work Experience: 6-10 years | Job Title: Sr. Data Engineer | City: Bangalore South | Province: Karnataka | Country: India | Postal Code: 560050 | Number of Positions: 1
We're looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities: Lead the design of data warehouses, lakes, and ETL workflows. Collaborate with teams to gather requirements and build scalable solutions. Ensure data governance, security, and optimal performance of systems. Mentor junior engineers and drive end-to-end project delivery.
Requirements: 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects. Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms. Expertise in big data tools (e.g., Apache Spark, Kafka). Excellent communication skills and leadership abilities.
Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.
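Since the posting names Airflow as a preferred orchestration tool, here is a minimal, assumption-laden sketch of a DAG wiring an extract step to a load step; the DAG id, schedule, and task bodies are placeholders rather than anything specified by the role.

```python
# Illustrative Airflow 2.4+ DAG: a daily extract -> load chain with placeholder logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")  # placeholder


def load():
    print("load transformed data into the warehouse")  # placeholder


with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```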
Posted 1 month ago
5.0 - 8.0 years
2 - 6 Lacs
Mumbai
Work from Office
Job Information: Job Opening ID ZR_1963_JOB | Date Opened 17/05/2023 | Industry: Technology | Work Experience: 5-8 years | Job Title: Neo4j GraphDB Developer | City: Mumbai | Province: Maharashtra | Country: India | Postal Code: 400001 | Number of Positions: 5
A Graph Data Engineer is required for a complex supply chain project.
Key required skills: Graph data modelling (experience with graph data models (LPG, RDF) and graph language (Cypher), exposure to various graph data modelling techniques). Experience with Neo4j Aura and optimizing complex queries. Experience with GCP stacks like BigQuery, GCS, Dataproc. Experience in PySpark and SparkSQL is desirable. Experience in exposing graph data to visualisation tools such as NeoDash, Tableau and Power BI.
The Expertise You Have: Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science, etc.). Demonstrable experience in implementing data solutions in the graph DB space. Hands-on experience with graph databases (Neo4j preferred, or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph language; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, and graph data design. Relational database experience; hands-on SQL experience is required.
Desirable (optional) skills: Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience in supply chain data is desirable but not essential.
Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
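As an illustrative example of the Cypher work described above, the sketch below uses the official Neo4j Python driver against a made-up supplier-to-product model; the URI, credentials, labels, and property names are all hypothetical assumptions.

```python
# Hedged sketch: query which suppliers feed a given product in a supply-chain graph.
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "neo4j+s://example.databases.neo4j.io",  # hypothetical Aura URI
    auth=("neo4j", "***"),
)

cypher = """
MATCH (s:Supplier)-[:SUPPLIES]->(p:Part)<-[:USES]-(prod:Product)
WHERE prod.sku = $sku
RETURN s.name AS supplier, count(p) AS parts_supplied
ORDER BY parts_supplied DESC
"""

with driver.session() as session:
    for record in session.run(cypher, sku="SKU-1001"):  # hypothetical SKU
        print(record["supplier"], record["parts_supplied"])

driver.close()
```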
Posted 1 month ago
6.0 - 11.0 years
13 - 18 Lacs
Ahmedabad
Work from Office
About the Company e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys and Naturium, high-performance, biocompatible, clinically-effective and accessible skincare. In our Fiscal year 24, we had net sales of $1 Billion and our business performance has been nothing short of extraordinary with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and are the fastest growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, flexible time off, year-round half-day Fridays, and a hybrid 3 day in office, 2 day at home work environment. We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us Job Summary: We’re looking for a strategic and technically strong Senior Data Architect to join our high-growth digital team. The selected person will play a critical role in shaping the company’s global data architecture and vision. The ideal candidate will lead enterprise-level architecture initiatives, collaborate with engineering and business teams, and guide a growing team of engineers and QA professionals. This role involves deep engagement across domains including Marketing, Product, Finance, and Supply Chain, with a special focus on marketing technology and commercial analytics relevant to the CPG/FMCG industry. The candidate should bring a hands-on mindset, a proven track record in designing scalable data platforms, and the ability to lead through influence. An understanding of industry-standard frameworks (e.g., TOGAF), tools like CDPs, MMM platforms, and AI-based insights generation will be a strong plus. Curiosity, communication, and architectural leadership are essential to succeed in this role. Key Responsibilities Enterprise Data Strategy: Design, define and maintain a holistic data strategy & roadmap that aligns with corporate objectives and fuels digital transformation. Ensure data architecture and products aligns with enterprise standards and best practices. Data Governance & Quality: Establish scalable governance frameworks to ensure data accuracy, privacy, security, and compliance (e.g., GDPR, CCPA). Oversee quality, security and compliance initiatives Data Architecture & Platforms: Oversee modern data infrastructure (e.g., data lakes, warehouses, streaming) with technologies like Snowflake, Databricks, AWS, and Kafka. Marketing Technology Integration: Ensure data architecture supports marketing technologies and commercial analytics platforms (e.g., CDP, MMM, ProfitSphere) tailored to the CPG/FMCG industry. Architectural Leadership: Act as a hands-on architect with the ability to lead through influence. Guide design decisions aligned with industry best practices and e.l.f.'s evolving architecture roadmap. 
Cross-Functional Collaboration: Partner with Marketing, Supply Chain, Finance, R&D, and IT to embed data-driven practices and deliver business impact. Lead integration of data from multiple sources to unified data warehouse. Cloud Optimization : Optimize data flows, storage for performance and scalability. Lead data migration priorities, manage metadata repositories and data dictionaries. Optimise databases and pipelines for efficiency. Manage and track quality, cataloging and observability AI/ML Enablement: Drive initiatives to operationalize predictive analytics, personalization, demand forecasting, and more using AI/ML models. Evaluate emerging data technologies and tools to improve data architecture. Team Leadership: Lead, mentor, and enable high-performing team of data engineers, analysts, and partners through influence and thought leadership. Vendor & Tooling Strategy: Manage relationships with external partners and drive evaluations of data and analytics tools. Executive Reporting: Provide regular updates and strategic recommendations to executive leadership and key stakeholders. Data Enablement : Design data models, database structures, and data integration solutions to support large volumes of data. Qualifications and Requirements Bachelor's or Master's degree in Computer Science, Information Systems, or a related field 18+ years of experience in Information Technology 8+ years of experience in data architecture, data engineering, or a related field, with a focus on large-scale, distributed systems. Strong understanding of data use cases in the CPG/FMCG sector. Experience with tools such as MMM (Marketing Mix Modeling), CDPs, ProfitSphere, or inventory analytics preferred. Awareness of architecture frameworks like TOGAF. Certifications are not mandatory, but candidates must demonstrate clear thinking and experience in applying architecture principles. Must possess excellent communication skills and a proven ability to work cross-functionally across global teams. Should be capable of leading with influence, not just execution. Knowledge of data warehousing, ETL/ELT processes, and data modeling Deep understanding of data modeling principles, including schema design and dimensional data modeling. Strong SQL development experience including SQL Queries and stored procedures Ability to architect and develop scalable data solutions, staying ahead of industry trends and integrating best practices in data engineering. Familiarity with data security and governance best practices Experience with cloud computing platforms such as Snowflake, AWS, Azure, or GCP Excellent problem-solving abilities with a focus on data analysis and interpretation. Strong communication and collaboration skills. Ability to translate complex technical concepts into actionable business strategies. Proficiency in one or more programming languages such as Python, Java, or Scala This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered, as detailed description of all the work required inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisors’ discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
Posted 1 month ago
6.0 - 10.0 years
12 - 18 Lacs
Hyderabad
Hybrid
Role & Responsibilities
Role Overview: We are seeking a talented and forward-thinking Data Engineer for one of the large financial services GCCs based in Hyderabad, with responsibilities that include designing and constructing data pipelines, integrating data from multiple sources, developing scalable data solutions, optimizing data workflows, collaborating with cross-functional teams, implementing data governance practices, and ensuring data security and compliance.
Technical Requirements: Proficiency in ETL, batch, and streaming processes. Experience with BigQuery, Cloud Storage, and Cloud SQL. Strong programming skills in Python, SQL, and Apache Beam for data processing. Understanding of data modeling and schema design for analytics. Knowledge of data governance, security, and compliance in GCP. Familiarity with machine learning workflows and integration with GCP ML tools. Ability to optimize performance within data pipelines.
Functional Requirements: Ability to collaborate with Data Operations, Software Engineers, Data Scientists, and Business SMEs to develop data product features. Experience in leading and mentoring peers within an existing development team. Strong communication skills to craft and communicate robust solutions. Proficient in working with Engineering Leads, Enterprise and Data Architects, and Business Architects to build appropriate data foundations. Willingness to work on contemporary data architecture in public and private cloud environments.
This role offers a compelling opportunity for a seasoned Data Engineer to drive transformative cloud initiatives within the financial sector, leveraging unparalleled experience and expertise to deliver innovative cloud solutions that align with business imperatives and regulatory requirements.
Qualification: Engineering Graduate / Postgraduate
Criteria: Proficient in ETL, Python, and Apache Beam for data processing efficiency. Demonstrated expertise in BigQuery, Cloud Storage, and Cloud SQL utilization. Strong collaboration skills with cross-functional teams for data product development. Comprehensive knowledge of data governance, security, and compliance in GCP. Experienced in optimizing performance within data pipelines for efficiency.
Relevant Experience: 6-9 years
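For orientation only, a batch Apache Beam pipeline of the kind implied by the Python/Beam/BigQuery stack above could be sketched as follows; the project, bucket, table, and schema are invented for the example and are not taken from this posting.

```python
# Hedged sketch: read a CSV from Cloud Storage, parse it, and append to BigQuery.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line: str) -> dict:
    # Assumed two-column layout: id,amount
    parts = line.split(",")
    return {"id": parts[0], "amount": float(parts[1])}


options = PipelineOptions(
    project="my-gcp-project",          # hypothetical
    temp_location="gs://my-bucket/tmp",  # hypothetical
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/raw/transactions.csv")
        | "Parse" >> beam.Map(parse_line)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-gcp-project:analytics.transactions",
            schema="id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```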
Posted 1 month ago
6.0 - 9.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Overview: We are looking for an experienced GCP BigQuery Lead to architect, develop, and optimize data solutions on Google Cloud Platform, with a strong focus on BigQuery. The role involves leading warehouse setup initiatives, collaborating with stakeholders, and ensuring scalable, secure, and high-performance data infrastructure.
Responsibilities: Lead the design and implementation of data pipelines using BigQuery, Datorama, Dataflow, and other GCP services. Architect and optimize data models and schemas to support analytics and reporting use cases. Implement best practices for performance tuning, partitioning, and cost optimization in BigQuery. Collaborate with business stakeholders to translate requirements into scalable data solutions. Ensure data quality, governance, and security across all BigQuery data assets. Automate workflows using orchestration tools. Mentor junior resources and lead script reviews, documentation, and knowledge sharing.
Qualifications: 6+ years of experience in data analytics, with 3+ years on GCP and BigQuery. Strong proficiency in SQL, with experience in writing complex queries and optimizing performance. Hands-on experience with ETL/ELT tools and frameworks. Deep understanding of data warehousing, dimensional modeling, and data lake architectures. Good exposure to data governance, lineage, and metadata management. GCP Data Engineer certification is a plus. Experience with BI tools (e.g., Looker, Power BI). Good communication and team-leading skills.
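To make the partitioning and cost-optimization point above concrete, here is a small hedged sketch using the google-cloud-bigquery client; the project, dataset, table, and column names are assumptions for illustration only.

```python
# Hedged sketch: create a date-partitioned, clustered table and run a pruned query.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project

# Partition by event date and cluster by customer to reduce bytes scanned.
client.query(
    """
    CREATE TABLE IF NOT EXISTS analytics.events_partitioned
    PARTITION BY DATE(event_ts)
    CLUSTER BY customer_id AS
    SELECT * FROM analytics.events_raw
    """
).result()

# The partition filter lets BigQuery prune partitions, keeping query cost low.
job = client.query(
    "SELECT COUNT(*) FROM analytics.events_partitioned "
    "WHERE DATE(event_ts) = '2025-06-01'"
)
print(list(job.result()))
```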
Posted 1 month ago
7.0 - 12.0 years
15 - 19 Lacs
Pune
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning; bold ideas; courage and passion to drive life-changing impact to ZS. Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
What You’ll Do: Design and implement an enterprise data management strategy aligned with business processes, focusing on data model designs, database development standards and data management frameworks. Develop and maintain data management and governance frameworks to ensure data quality, consistency and compliance for different Discovery domains such as multi-omics, in vivo, ex vivo, and in vitro datasets. Design and develop scalable cloud-based (AWS or Azure) solutions following enterprise standards. Design robust data models for semi-structured/structured datasets by following various modelling techniques. Design and implement complex ETL data pipelines to handle various semi-structured/structured datasets coming from labs and scientific platforms. Work with lab ecosystems (ELNs, LIMS, CDS, etc.) to build integration and data solutions around them. Collaborate with various stakeholders, including data scientists, researchers, and IT, to optimize data utilization and align data strategies with organizational goals. Stay abreast of the latest trends in data management technologies and introduce innovative approaches to data analysis and pipeline development. Lead projects from conception to completion, ensuring alignment with enterprise goals and standards. Communicate complex technical details effectively to both technical and non-technical stakeholders.
What You’ll Bring: Minimum of 7+ years of hands-on experience in developing data management solutions solving problems in the Discovery/Research domain. Advanced knowledge of data management tools and frameworks, such as SQL/NoSQL, ETL/ELT tools, and data visualization tools across various private clouds. Strong experience in the following cloud-based DBMS/data warehouse offerings: AWS Redshift, AWS RDS/Aurora, Snowflake, Databricks. ETL tools: cloud-based tools. Well versed with different cloud computing offerings in AWS and Azure. Well aware of industry data security and governance norms. Building API integration layers between multiple systems. Hands-on experience with data platform technologies like Databricks, AWS, Snowflake, HPC (certifications will be a plus). Strong programming skills in languages such as Python and R. Strong organizational and leadership skills.
Bachelor’s or Master’s degree in Computational Biology, Computer Science, or a related field. Ph.D. is a plus. Preferred/Good To Have MLOps expertise leveraging ML Platforms like Dataiku, Databricks, Sagemaker Experience with Other technologies like Data Sharing (eg. Starburst), Data Virtualization (Denodo), API Management (mulesoft etc) Cloud Solution Architect certification (like AWS SA Professional or others) Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empowers you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections. Travel: Travel is a requirement at ZS for client facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures. Considering applying At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law. To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment.An on-line application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find Out More At www.zs.com
Posted 1 month ago
7.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Role & responsibilities: 3+ years of experience with Snowflake (Snowpipe, Streams, Tasks). Strong proficiency in SQL for high-performance data transformations. Hands-on experience building ELT pipelines using cloud-native tools. Proficiency in dbt for data modeling and workflow automation. Python skills (Pandas, PySpark, SQLAlchemy) for data processing. Experience with orchestration tools like Airflow or Prefect.
Preferred candidate profile: Hands-on with Python, including libraries like Pandas, PySpark, or SQLAlchemy. Experience with data cataloging, metadata management, and column-level lineage. Exposure to BI tools like Tableau or Power BI. Certifications: Snowflake SnowPro Core Certification preferred.
Contact details: Sindhu@iflowonline.com or 9154984810
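As a rough sketch of the Streams/Tasks pattern the first requirement refers to, the snippet below (snowflake-connector-python) creates a change-capture stream and a scheduled task that moves new rows downstream; every object name, schedule, and column is hypothetical.

```python
# Hedged sketch: stream on a raw table + a task that loads new rows on a schedule.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",      # hypothetical
    user="ELT_USER",
    password="***",
    warehouse="ELT_WH",
    database="RAW",
    schema="PUBLIC",
)
cur = conn.cursor()

# Stream records inserts/updates/deletes on the raw orders table.
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE raw.public.orders")

# Task runs every 5 minutes, but only when the stream actually has new data.
cur.execute("""
    CREATE TASK IF NOT EXISTS merge_orders
      WAREHOUSE = ELT_WH
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      INSERT INTO analytics.public.orders_clean
      SELECT order_id, amount, updated_at FROM orders_stream
""")
cur.execute("ALTER TASK merge_orders RESUME")  # tasks are created suspended
cur.close()
conn.close()
```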
Posted 1 month ago
10.0 - 15.0 years
35 - 40 Lacs
Pune
Work from Office
The Impact of a Lead Software Engineer - Data to Coupa: The Lead Software Engineer - Data is a pivotal role at Coupa, responsible for leading the architecture, design, and optimization of the data infrastructure that powers our business. This individual will collaborate with cross-functional teams, including Data Scientists, Product Managers, and Software Engineers, to build and maintain scalable, high-performance data solutions. The Lead Software Engineer - Data will drive the development of robust data architectures, capable of handling large and complex datasets, while ensuring data integrity, security, and governance. Additionally, this role will provide technical leadership, mentoring engineers, and defining best practices to ensure the efficiency and scalability of our data systems. Suitable candidates will have a strong background in data engineering, with experience in data modeling, ETL development, and data pipeline optimization. They will also have deep expertise in programming languages such as Python, Java, or Scala, along with hands-on experience in cloud-based data storage and processing technologies such as AWS, Azure, or GCP. The impact of a skilled Lead Software Engineer - Data at Coupa will be significant, ensuring that our platform is powered by scalable, reliable, and high-quality data solutions. This role will enable the company to deliver innovative, data-driven solutions to our customers and partners. Their work will contribute to the overall success and growth of Coupa, solidifying its position as a leader in cloud-based spend management solutions.
What You'll Do: Lead and drive the development and optimization of scalable data architectures and pipelines. Design and implement best-in-class ETL/ELT solutions for real-time and batch data processing. Optimize Spark clusters for performance, reliability, and cost efficiency, implementing monitoring solutions to identify bottlenecks. Architect and maintain cloud-based data infrastructure leveraging AWS, Azure, or GCP services. Ensure data security and governance, enforcing compliance with industry standards and regulations. Develop and promote best practices for data modeling, processing, and analytics. Mentor and guide a team of data engineers, fostering a culture of innovation and technical excellence. Collaborate with stakeholders, including Product, Engineering, and Data Science teams, to support data-driven decision-making. Automate and streamline data ingestion, transformation, and analytics processes to enhance efficiency. Develop real-time and batch data processing solutions, integrating structured and unstructured data sources.
What you will bring to Coupa: Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases. Expertise in processing large workloads and complex code on Spark clusters. Expertise in setting up monitoring for Spark clusters and driving optimization based on insights and findings. Experience in designing and implementing scalable data warehouse solutions to support analytical and reporting needs. Experience with API development and design with REST or GraphQL. Experience building and optimizing big data pipelines, architectures, and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets.
Build processes supporting data transformation, data structures, metadata, dependency, and workload management. Working knowledge of message queuing, stream processing, and highly scalable big data stores. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment. We are looking for a candidate with 10+ years of experience in Data Engineering, with at least 3+ years in a Technical Lead role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: object-oriented/object-functional scripting languages such as Python, Java, C++, .NET, etc. (expertise in Python is a must); big data tools such as Spark, Kafka, etc.; relational SQL and NoSQL databases, including Postgres and Cassandra; data pipeline and workflow management tools such as Azkaban, Luigi, Airflow, etc.; AWS cloud services such as EC2, EMR, RDS, Redshift; and working knowledge of stream-processing systems such as Storm, Spark Streaming, etc.
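Since this posting emphasizes Spark, Kafka, and real-time ingestion, a hedged PySpark sketch of that pattern is shown below; the broker address, topic, and lake paths are hypothetical placeholders, and the Kafka source requires the spark-sql-kafka connector package on the cluster.

```python
# Hedged sketch: stream a Kafka topic into a bronze layer of the data lake.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")  # hypothetical
    .option("subscribe", "orders")                                  # hypothetical topic
    .load()
    .select(
        col("key").cast("string"),
        col("value").cast("string"),
        col("timestamp"),
    )
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://data-lake/bronze/orders/")               # hypothetical paths
    .option("checkpointLocation", "s3a://data-lake/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```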
Posted 1 month ago
6.0 - 9.0 years
20 - 25 Lacs
Hyderabad
Hybrid
Role & responsibilities: Design, build, and measure complex ELT jobs to process disparate data sources and form a high-integrity, high-quality, clean data asset. Execute and provide feedback on data modeling policies, procedures, processes, and standards. Assist with capturing and documenting system flow and other pertinent technical information about data, database design, and systems. Develop data quality standards and tools for ensuring accuracy. Work across departments to understand new data patterns. Translate high-level business requirements into technical specs.
Bachelor's degree in computer science or engineering. Years of experience with data analytics, data modeling, and database design. Years of experience with Vertica. Years of coding and scripting (Python, Java, Scala) and design experience. Years of experience with Airflow. Experience with ELT methodologies and tools. Experience with GitHub. Expertise in tuning and troubleshooting SQL. Strong data integrity, analytical and multitasking skills. Excellent communication, problem solving, organizational and analytical skills. Able to work independently.
Additional / preferred skills: Familiar with agile project delivery process. Knowledge of SQL and its use in data access and analysis. Ability to manage diverse projects impacting multiple roles and processes. Able to troubleshoot problem areas and identify data gaps and issues. Ability to adapt to a fast-changing environment. Experience designing and implementing automated ETL processes. Experience with the MicroStrategy reporting tool.
Preferred candidate profile
Posted 1 month ago
7.0 - 12.0 years
3 - 7 Lacs
Gurugram
Work from Office
AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.
AHEAD is looking for a Sr. Data Engineer (L3 support) to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will have responsibility for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. The appropriate candidate must be a subject matter expert in managing data platforms.
Responsibilities: A Sr. Data Engineer should be able to build, operationalize and monitor data processing systems. Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms using batch and streaming mechanisms, leveraging a cloud-native toolset. Implement custom applications using tools such as Event Hubs, ADF and other cloud-native tools as required to address streaming use cases. Engineer and maintain ELT processes for loading the data lake (Cloud Storage, Data Lake Gen2). Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions. Respond to customer/team inquiries and escalations and assist in troubleshooting and resolving challenges. Work with other scrum team members to estimate and deliver work inside of a sprint. Research data questions, identify root causes, and interact closely with business users and technical resources. Should possess ownership and leadership skills to collaborate effectively with Level 1 and Level 2 teams. Must have experience in raising tickets with Microsoft and engaging with them to address any service or tool outages in production.
Qualifications: 7+ years of professional technical experience. 5+ years of hands-on data architecture and data modelling at SME level. 5+ years of experience building highly scalable data solutions using Azure Data Factory, Spark, Databricks, and Python. 5+ years of experience working in cloud environments (AWS and/or Azure). 3+ years with programming languages such as Python, Spark and Spark SQL. Should have strong knowledge of the architecture of ADF and Databricks. Able to work with Level 1 and Level 2 teams to resolve platform outages in production environments.
Strong client-facing communication and facilitation skills Strong sense of urgency, ability to set priorities and perform the job with little guidance Excellent written and verbal interpersonal skills and the ability to build and maintain collaborative and positive working relationships at all levels Strong interpersonal and communication skills (Written and oral) required Should be able to work in shifts Should have knowledge on azure Dev Ops process. Key Skills: Azure Data Factory, Azure Data bricks, Python, ETL/ELT, Spark, Data Lake, Data Engineering, EventHubs, Azure delta, Spark streaming Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross department training and development, sponsoring certifications and credentials for continued learning. USA Employment Benefits include - Medical, Dental, and Vision Insurance - 401(k) - Paid company holidays - Paid time off - Paid parental and caregiver leave - Plus more! See benefits https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidates relevant experience, qualifications, and geographic location.
Posted 2 months ago
15.0 - 20.0 years
20 - 25 Lacs
Mumbai
Work from Office
Who are you: Delivery Manager for Large Data transformation engagements, especially for the Financial Sector managing the contractual client deliverables with customer satisfaction while mitigating risks. What you’ll do: 1.End to End Delivery Orchestration corresponding to the contract. 2.Project Planning (WBS, Goal Setting), estimation and Schedule management 3.Client communication and stakeholder Management 4.Identifications of risks and working out the mitigating action plan. 5.Issue Management and assignment. 6. Management of Client Commitments 7.Financial Management (Revenue & Gross Profit) 8.1. Base Account Growth Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 1.15+ years of experience in executing Banking Data Transformation projects 2.Of the 15+ years of experience, a minimum of 7 years of having worked as a functional team leader and at least 5 years as a technical team member. 3.Strong understanding of Data Lakehouse, Data Mesh and related architectures applicable in the Financial Services Sector and delivery risks associated. 4.Understanding of the following: a.ETL/ELT processes, and data pipeline orchestration b.Industry Data Model c.Understanding of Banking Book and performance measurement in each of them d.Understanding of accuracy of data needed in for various analysis e.Understanding data reconciliation. 5.Agile Project Management Delivery Methodology and project tracking methods and tools Preferred technical and professional experience 1.Knowledge of various industry data stacks used by Banking Industry 2.Experience in vendor management and contracting 3.Knowledge of data security frameworks and experience of supporting regulatory audits in the financial sector 4.PMP, PRINCE2, or Agile/Scrum certification
Posted 2 months ago
4.0 - 6.0 years
7 - 14 Lacs
Udaipur, Kolkata, Jaipur
Hybrid
Senior Data Engineer
Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.
Role: Senior Data Engineer
Experience: 4-6 Yrs
Location: Udaipur, Jaipur, Kolkata
Job Description: We are looking for a highly skilled and experienced Data Engineer with 4-6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.
Key Responsibilities: Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python. Collaborate with data analysts, data scientists, and product teams to understand data needs. Optimize queries and data models for performance and reliability. Integrate data from various sources, including APIs, internal databases, and third-party systems. Monitor and troubleshoot data pipelines to ensure data quality and integrity. Document processes, data flows, and system architecture. Participate in code reviews and contribute to a culture of continuous improvement.
Required Skills: 4-6 years of experience in data engineering, data architecture, or backend development with a focus on data. Strong command of SQL for data transformation and performance tuning. Experience with Python (e.g., pandas, Spark, ADF). Solid understanding of ETL/ELT processes and data pipeline orchestration. Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server). Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery). Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes). Basic programming skills. Excellent problem-solving skills and a passion for clean, efficient data systems.
Preferred Skills: Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc. Exposure to enterprise solutions (e.g., Databricks, Synapse). Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop). Background in real-time data streaming and event-driven architectures. Understanding of data governance, security, and compliance best practices. Prior experience working in an agile development environment.
Educational Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field.
Visit us: https://kadellabs.com/ https://in.linkedin.com/company/kadel-labs https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm
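As a purely illustrative sketch of the SQL-plus-Python pipeline work this role describes, the snippet below extracts from a PostgreSQL source with pandas, applies a small transformation, and loads a staging table; both connection strings and all table names are hypothetical.

```python
# Hedged ETL sketch: PostgreSQL source -> pandas transform -> warehouse staging table.
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection strings; real credentials would come from a secrets store.
source = create_engine("postgresql+psycopg2://etl:***@pg.example.com:5432/appdb")
target = create_engine("postgresql+psycopg2://etl:***@dw.example.com:5432/warehouse")

# Extract: pull the columns needed downstream.
orders = pd.read_sql(
    "SELECT order_id, customer_id, amount, created_at FROM orders", source
)

# Transform: derive a clean date column for partition-style filtering.
orders["order_date"] = pd.to_datetime(orders["created_at"]).dt.date

# Load: replace the staging table each run; an incremental merge is the usual next step.
orders.to_sql("stg_orders", target, schema="staging", if_exists="replace", index=False)
print(f"loaded {len(orders)} rows")
```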
Posted 2 months ago
7.0 - 12.0 years
20 - 22 Lacs
Bengaluru
Remote
Collaborate with senior stakeholders to gather requirements, address constraints, and craft adaptable data architectures. Convert business needs into blueprints, guide agile teams, maintain quality data pipelines, and drive continuous improvements. Required Candidate profile 7+yrs in data roles(Data Architect/Engineer). Skilled in modelling (incl. Data Vault 2.0), Snowflake, SQL/Python, ETL/ELT, CI/CD, data mesh, governance & APIs. Agile; strong stakeholder & comm skills. Perks and benefits As per industry standards
Posted 2 months ago
1.0 - 4.0 years
2 - 5 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE
You will play a key role in a regulatory submission content automation initiative which will modernize and digitize the regulatory submission process, positioning Amgen as a leader in regulatory innovation. The initiative leverages state-of-the-art technologies, including Generative AI, Structured Content Management, and integrated data to automate the creation, review, and approval of regulatory content. The role is responsible for sourcing and analyzing data for this initiative and supporting the design, building, and maintenance of the data pipelines that drive business actions and automation. This role involves working with Operations source systems, finding the right data sources, standardizing data sets, and supporting data governance to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities: Ensure a reliable, secure and compliant operating environment. Identify, extract, and integrate required business data from Operations systems residing in modern cloud-based architectures. Design, develop, test and maintain scalable data pipelines, ensuring data quality via ETL/ELT processes. Schedule and manage workflows to ensure pipelines run on schedule and are monitored for failures. Implement data integration solutions and manage end-to-end pipeline projects, including scope, timelines, and risk. Reverse-engineer schemas and explore source system tables to map local representations of target business concepts. Navigate application UIs and backends to gain business domain knowledge and detect data inconsistencies. Break down information models into fine-grained, business-contextualized data components. Work closely with cross-functional teams, including product teams, data architects, and business SMEs, to understand requirements and design solutions. Collaborate with data scientists to develop pipelines that meet dynamic business needs across regions. Create and maintain data models, dictionaries, and documentation to ensure accuracy and consistency. Adhere to SOPs, GDEs, and best practices for coding, testing, and reusable component design.
Basic Qualifications and Experience: Master's degree and 1 to 3 years of Computer Science, IT or related field experience OR Bachelor's degree and 3 to 5 years of Computer Science, IT or related field experience OR Diploma and 7 to 9 years of Computer Science, IT or related field experience.
Functional Skills:
Must-Have Skills: Hands-on experience with data practices, technologies, and platforms, such as Databricks, Python, Prophecy, GitLab, Lucidchart, etc. Proficiency in data analysis tools (e.g., SQL) and experience with data sourcing tools. Excellent problem-solving skills and the ability to work with large, complex datasets. Understanding of data governance frameworks, tools, and best practices.
Knowledge of and experience with data standards (FAIR) and protection regulations and compliance requirements (e.g., GDPR, CCPA) Good-to-Have Skills: Experience with ETL tools and various Python packages related to data processing, machine learning model development Strong understanding of data modeling, data warehousing, and data integration concepts Knowledge of Python /R , Databricks, cloud data platforms Professional Certifications Certified Data Engineer / Data Analyst (preferred on Databricks ) Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills
Posted 2 months ago
5.0 - 10.0 years
4 - 8 Lacs
Chennai, Guindy
Work from Office
Data ELT Engineer
Chennai - Guindy, India | Information Technology | 17075
Overview: We are looking for a highly skilled Data ELT Engineer to architect and implement data solutions that support our enterprise analytics and real-time decision-making capabilities. This role combines data modeling expertise with hands-on experience building and managing ELT pipelines across diverse data sources. You will work with Snowflake, AWS Glue, and Apache Kafka to ingest, transform, and stream both batch and real-time data, ensuring high data quality and performance across systems. If you have a passion for data architecture and scalable engineering, we want to hear from you.
Responsibilities: Design, build, and maintain scalable ELT pipelines into Snowflake from diverse sources including relational databases (SQL Server, MySQL, Oracle) and SaaS platforms. Utilize AWS Glue for data extraction and transformation, and Kafka for real-time streaming ingestion. Model data using dimensional and normalized techniques to support analytics and business intelligence workloads. Handle large-scale batch processing jobs and implement real-time streaming solutions. Ensure data quality, consistency, and governance across pipelines. Collaborate with data analysts, data scientists, and business stakeholders to align models with organizational needs. Monitor, troubleshoot, and optimize pipeline performance and reliability.
Requirements: 5+ years of experience in data engineering and data modeling. Strong proficiency with SQL and data modeling techniques (star, snowflake schemas). Hands-on experience with the Snowflake data platform. Proficiency with AWS Glue (ETL jobs, crawlers, workflows). Experience using Apache Kafka for streaming data integration. Experience with batch and streaming data processing. Familiarity with orchestration tools (e.g., Airflow, Step Functions) is a plus. Strong understanding of data governance and best practices in data architecture. Excellent problem-solving skills and communication abilities.
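To ground the AWS Glue responsibilities above, here is a minimal, assumption-laden Glue ETL script: it reads a catalogued source table and writes Parquet to S3 for downstream loading into Snowflake. The catalog database, table name, and S3 path are invented for the example.

```python
# Hedged Glue job sketch: catalogued JDBC source -> Parquet landing zone on S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog (hypothetical names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="erp_raw", table_name="orders"
)

# Land as Parquet; a Snowpipe or COPY INTO step would pick this up afterwards.
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={"path": "s3://elt-landing/orders/"},
    format="parquet",
)
job.commit()
```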
Posted 2 months ago
7.0 - 12.0 years
18 - 20 Lacs
Hyderabad
Work from Office
We are hiring a Senior Data Management Specialist (Level 3) for a US-based IT company located in Hyderabad. Candidates with experience in Data Management and Snowflake can apply. Job Title: Senior Data Management Specialist Level 3 Location: Hyderabad Experience: 7+ Years CTC: 18 LPA - 20 LPA Working shift: Day shift Description: We are looking for an experienced and highly skilled Data Management Specialist (Level 3) to contribute to enterprise-level data solutions with an emphasis on cloud data platforms and modern data engineering tools. The ideal candidate will possess hands-on expertise with Snowflake, combined with a solid foundation in data integration, modeling, and cloud-based database technologies. This role is a key part of a high-impact data team dedicated to ensuring the quality, availability, and governance of enterprise data assets. As a Level 3 specialist, the individual will be expected to lead and execute complex data management tasks while collaborating closely with data architects, analysts, and business stakeholders. Key Responsibilities: Design, develop, and maintain scalable data pipelines and integrations using Snowflake and other cloud data technologies Handle structured and unstructured data to support analytics, reporting, and operational workloads Develop and optimize complex SQL queries and data transformation logic Collaborate with data stewards and governance teams to uphold data quality, consistency, and compliance Perform data profiling, cleansing, and validation across multiple source systems Support ETL/ELT development and data migration initiatives using tools like Informatica, Talend, or dbt Design and maintain data models, including star and snowflake schemas Ensure performance tuning, monitoring, and troubleshooting of Snowflake environments Document data processes, data lineage, and metadata within the data governance framework Act as a technical SME, offering guidance and support to junior team members Required Skills & Qualifications: Minimum 5 years of experience in data engineering, data management, or similar roles Strong hands-on experience with Snowflake (development, administration, performance optimization) Proficiency in SQL, data modeling, and cloud-native data architectures Experience working on cloud platforms such as AWS, Azure, or Google Cloud (with Snowflake) Familiarity with ETL tools like Informatica, Talend, or dbt Solid understanding of data governance, metadata management, and data quality best practices Experience with Python or Shell scripting for automation and data operations Strong analytical and problem-solving abilities Excellent communication and documentation skills For further assistance contact/whatsapp: 9354909512 or write to pankhuri@gist.org.in
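As an illustration of the data-profiling responsibility mentioned above, a small sketch using the Snowflake Python connector; the account, credentials, and table name are placeholders:

```python
# Illustrative data-profiling pass over a Snowflake table: row counts,
# null counts, and distinct counts per column. All identifiers are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="profiler", password="***",
    warehouse="ANALYTICS_WH", database="EDW", schema="SALES",
)
cur = conn.cursor()

table = "CUSTOMER_DIM"
cur.execute(
    f"SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '{table}'"
)
columns = [row[0] for row in cur.fetchall()]

for col in columns:
    # One scan per column keeps the example simple; a real profiler would batch these
    cur.execute(
        f"SELECT COUNT(*), COUNT_IF({col} IS NULL), COUNT(DISTINCT {col}) FROM {table}"
    )
    total, nulls, distinct = cur.fetchone()
    print(f"{col}: rows={total}, nulls={nulls}, distinct={distinct}")

cur.close()
conn.close()
```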
Posted 2 months ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Role Overview: The Senior Tech Lead - Snowflake leads the design, development, and optimization of advanced data warehousing solutions. The jobholder has extensive experience with Snowflake, data architecture, and team leadership, with a proven ability to deliver scalable and secure data systems. Responsibilities: Lead the design and implementation of Snowflake-based data architectures and pipelines. Provide technical leadership and mentorship to a team of data engineers. Collaborate with stakeholders to define project requirements and ensure alignment with business goals. Ensure best practices in data security, governance, and compliance. Troubleshoot and resolve complex technical issues in Snowflake environments. Stay updated on the latest Snowflake technologies and industry trends. Key Technical Skills & Responsibilities Minimum 7+ years of experience designing and developing data warehouse / big data applications Must be able to lead data product development using Streamlit and Cortex Deep understanding of relational as well as NoSQL data stores and of data modeling methods and approaches (star and snowflake, dimensional modeling) Good communication skills. Must have experience in solution architecture using Snowflake Must have experience working with the Snowflake data platform, its utilities (SnowSQL, Snowpipe, etc.) and its features (Time Travel, support for semi-structured data, etc.) Must have experience migrating an on-premises data warehouse to the Snowflake cloud data platform Must have experience working with a cloud platform: AWS | Azure | GCP Experience developing accelerators (using Python, Java, etc.) to expedite migration to Snowflake Must be good with Python and PySpark (including Snowpark) for data pipeline building. Must have experience working with streaming data sources and Kafka. Extensive experience developing ANSI SQL queries and Snowflake-compatible stored procedures Snowflake certification is preferred Eligibility Criteria: Bachelor's degree in Computer Science, Data Engineering, or a related field. Extensive experience with Snowflake, SQL, and data modeling. Snowflake certification (e.g., SnowPro Core Certification). Experience with cloud platforms like AWS, Azure, or GCP. Strong understanding of ETL/ELT processes and cloud integration. Proven leadership experience in managing technical teams. Excellent problem-solving and communication skills. Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance - integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
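For context on the Snowpark skill listed above, a minimal Snowpark-for-Python sketch of an in-warehouse transformation; the connection parameters and table names are placeholders:

```python
# Illustrative Snowpark (Python) transformation: aggregate raw orders into a
# reporting table entirely inside Snowflake. Object names are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "my_account", "user": "etl_user", "password": "***",
    "warehouse": "TRANSFORM_WH", "database": "EDW", "schema": "SALES",
}).create()

orders = session.table("RAW_ORDERS")

daily_revenue = (
    orders.filter(col("STATUS") == "COMPLETE")
          .group_by(col("ORDER_DATE"))
          .agg(sum_(col("AMOUNT")).alias("TOTAL_REVENUE"))
)

# Materialize the result; Snowpark pushes the work down to Snowflake
daily_revenue.write.mode("overwrite").save_as_table("DAILY_REVENUE")

session.close()
```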
Posted 2 months ago
5.0 - 10.0 years
0 - 1 Lacs
Ahmedabad, Chennai, Bengaluru
Hybrid
Job Summary: We are seeking an experienced Snowflake Data Engineer to design, develop, and optimize data pipelines and data architecture using the Snowflake cloud data platform. The ideal candidate will have a strong background in data warehousing, ETL/ELT processes, and cloud platforms, with a focus on creating scalable and high-performance solutions for data integration and analytics. --- Key Responsibilities: * Design and implement data ingestion, transformation, and loading processes (ETL/ELT) using Snowflake. * Build and maintain scalable data pipelines using tools such as dbt, Apache Airflow, or similar orchestration tools. * Optimize data storage and query performance in Snowflake using best practices in clustering, partitioning, and caching. * Develop and maintain data models (dimensional/star schema) to support business intelligence and analytics initiatives. * Collaborate with data analysts, scientists, and business stakeholders to gather data requirements and translate them into technical solutions. * Manage Snowflake environments including security (roles, users, privileges), performance tuning, and resource monitoring. * Integrate data from multiple sources including cloud storage (AWS S3, Azure Blob), APIs, third-party platforms, and streaming data. * Ensure data quality, reliability, and governance through testing and validation strategies. * Document data flows, definitions, processes, and architecture. --- Required Skills and Qualifications: * 3+ years of experience as a Data Engineer or in a similar role working with large-scale data systems. * 2+ years of hands-on experience with Snowflake including SnowSQL, Snowpipe, Streams, Tasks, and Time Travel. * Strong experience in SQL and performance tuning for complex queries and large datasets. * Proficiency with ETL/ELT tools such as dbt, Apache NiFi, Talend, Informatica, or custom scripts. * Solid understanding of data modeling concepts (star schema, snowflake schema, normalization, etc.). * Experience with cloud platforms (AWS, Azure, or GCP), particularly using services like S3, Redshift, Lambda, Azure Data Factory, etc. * Familiarity with Python or Java or Scala for data manipulation and pipeline development. * Experience with CI/CD processes and tools like Git, Jenkins, or Azure DevOps. * Knowledge of data governance, data quality, and data security best practices. * Bachelor's degree in Computer Science, Information Systems, or a related field. --- Preferred Qualifications: * Snowflake SnowPro Core Certification or Advanced Architect Certification. * Experience integrating BI tools like Tableau, Power BI, or Looker with Snowflake. * Familiarity with real-time streaming technologies (Kafka, Kinesis, etc.). * Knowledge of Data Vault 2.0 or other advanced data modeling methodologies. * Experience with data cataloging and metadata management tools (e.g., Alation, Collibra). * Exposure to machine learning pipelines and data science workflows is a plus.
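As a sketch of the orchestration pattern this posting describes (Airflow scheduling a Snowflake load followed by dbt transformations), assuming a recent Airflow 2.x deployment with the SnowSQL CLI and a dbt project available on the worker; connection IDs, stage names, and paths are placeholders:

```python
# Minimal Airflow DAG sketch: load raw files into Snowflake, then run dbt.
# All object names, paths, and the schedule are illustrative placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="snowflake_elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:

    load_raw = BashOperator(
        task_id="copy_into_raw",
        # snowsql CLI assumed to be installed and configured on the worker
        bash_command=(
            "snowsql -q \"COPY INTO RAW.SALES.ORDERS "
            "FROM @RAW.SALES.ORDERS_STAGE FILE_FORMAT=(TYPE=PARQUET)\""
        ),
    )

    run_dbt = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )

    load_raw >> run_dbt
```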
Posted 2 months ago
5.0 - 10.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans. Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices. Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security. Data Integration: Define and implement data integration strategies to facilitate the seamless flow of information across systems. Responsibilities: Experience in data architecture and engineering Proven expertise with the Snowflake data platform Strong understanding of ETL/ELT processes and data integration Experience with data modeling and data warehousing concepts Familiarity with performance tuning and optimization techniques Excellent problem-solving skills and attention to detail Strong communication and collaboration skills Required education: Bachelor's Degree Preferred education: Master's Degree Required technical and professional expertise: Cloud & Data Architecture: AWS, Snowflake ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions Big Data & Analytics: Athena, Presto, Hadoop Database & Storage: SQL, SnowSQL Security & Compliance: IAM, KMS, Data Masking Preferred technical and professional experience: Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization) Data Transformation: dbt (Data Build Tool) for ELT pipeline management Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)
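To illustrate the AWS Glue / Athena portion of the stack listed above, a sketch of a Glue PySpark job that reads a cataloged source table, reshapes it, and writes partitioned Parquet to S3; the database, table, and bucket names are placeholders:

```python
# Sketch of an AWS Glue PySpark job: catalog read -> column mapping ->
# partitioned Parquet on S3. All identifiers are illustrative placeholders.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (crawled source)
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
)

# Rename and cast columns into the target shape
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_ts", "string", "order_date", "date"),
        ("amount", "double", "amount", "double"),
    ],
)

# Write partitioned Parquet to S3 for Athena / downstream Snowflake ingestion
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-curated-bucket/orders/", "partitionKeys": ["order_date"]},
    format="parquet",
)

job.commit()
```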
Posted 2 months ago
3.0 - 8.0 years
16 - 18 Lacs
Hyderabad
Work from Office
We are hiring a Data Management Specialist (Level 2) for a US-based IT company located in Hyderabad. Job Title: Data Management Specialist Level 2 Location: Hyderabad Experience: 3+ Years CTC: 16 LPA - 18 LPA Working shift: Day shift We are seeking a Level 2 Data Management Specialist to join our data team and support the development, maintenance, and optimization of data pipelines and cloud-based data platforms. The ideal candidate will have hands-on experience with Snowflake, along with a solid foundation in SQL, data integration, and cloud data technologies. As a mid-level contributor, this position will collaborate closely with senior data engineers and business analysts to deliver reliable, high-quality data solutions for reporting, analytics, and operational needs. You will help develop scalable data workflows, resolve data quality issues, and ensure compliance with data governance practices. Key Responsibilities: Design, build, and maintain scalable data pipelines using Snowflake and SQL-based transformation logic Assist in developing and optimizing data models to support reporting and business intelligence efforts Write efficient SQL queries for data extraction, transformation, and analysis Collaborate with cross-functional teams to gather data requirements and implement dependable data solutions Support data quality checks and validation procedures to ensure data integrity and consistency Contribute to data integration tasks across various sources, including relational databases and cloud storage Document technical workflows, data definitions, and transformation logic for reference and compliance Monitor the performance of data processes and help troubleshoot workflow issues Required Skills & Qualifications: 2-4 years of experience in data engineering or data management roles Proficiency in Snowflake for data development or analytics Strong SQL skills and a solid grasp of relational database concepts Familiarity with ETL/ELT tools such as Informatica, Talend, or dbt Basic understanding of cloud platforms like AWS, Azure, or GCP Knowledge of data modeling techniques (e.g., star and snowflake schemas) Excellent attention to detail, strong analytical thinking, and problem-solving skills Effective team player with the ability to clearly communicate technical concepts Preferred Skills: Exposure to data governance or data quality frameworks Experience working in the banking or financial services industry Basic scripting skills in Python or Shell Familiarity with Agile/Scrum methodologies Experience using Git or other version control tools For further assistance contact/whatsapp: 9354909521 / 9354909512 or write to pankhuri@gist.org.in
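As an example of the "SQL-based transformation logic" mentioned above, a common incremental-load pattern in Snowflake is a MERGE from staging into a dimension table; the object names below are placeholders, and the statement could equally be run from dbt, a stored procedure, or a scheduled task:

```python
# Illustrative incremental load: merge a staging table into a dimension
# on a business key. All identifiers and credentials are placeholders.
import snowflake.connector

MERGE_SQL = """
MERGE INTO EDW.SALES.CUSTOMER_DIM AS tgt
USING RAW.STAGING.CUSTOMER_UPDATES AS src
    ON tgt.CUSTOMER_ID = src.CUSTOMER_ID
WHEN MATCHED THEN UPDATE SET
    tgt.NAME = src.NAME,
    tgt.SEGMENT = src.SEGMENT,
    tgt.UPDATED_AT = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, NAME, SEGMENT, UPDATED_AT)
    VALUES (src.CUSTOMER_ID, src.NAME, src.SEGMENT, CURRENT_TIMESTAMP())
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH",
)
try:
    conn.cursor().execute(MERGE_SQL)
finally:
    conn.close()
```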
Posted 2 months ago
5.0 - 8.0 years
20 - 25 Lacs
Chennai
Remote
Execute and support R&D activities leveraging metadata from multiple databases, ETL/ELT products, reporting tools, etc. Develop, test, and deploy Python- and SQL-based solutions to automate and optimize operational processes. Data Analysis & Reporting. Required Candidate Profile: Provide hands-on programming support for AI-driven initiatives. Mastery in Python programming, advanced SQL proficiency, and strong command of analytical methodologies, statistical concepts, and data visualization techniques.
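A small sketch of the metadata-driven Python/SQL work this role hints at: pulling column metadata from several source databases into one pandas frame for comparison. The connection strings and source names are placeholders, assuming SQLAlchemy-compatible drivers are installed:

```python
# Sketch: collect information_schema column metadata from multiple databases
# and compare schemas side by side. Connection URLs are placeholders.
import pandas as pd
from sqlalchemy import create_engine

SOURCES = {
    "warehouse": "postgresql://readonly:***@warehouse-host/analytics",
    "orders_db": "mysql+pymysql://readonly:***@orders-host/orders",
}

METADATA_SQL = """
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
"""

frames = []
for name, url in SOURCES.items():
    engine = create_engine(url)
    df = pd.read_sql(METADATA_SQL, engine)
    df["source"] = name
    frames.append(df)

catalog = pd.concat(frames, ignore_index=True)

# Example check: columns present in one source but missing from the other
pivot = catalog.pivot_table(
    index=["table_name", "column_name"], columns="source",
    values="data_type", aggfunc="first",
)
print(pivot[pivot.isna().any(axis=1)])
```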
Posted 2 months ago