5.0 - 10.0 years
7 - 12 Lacs
Mumbai, Hyderabad, Bengaluru
Hybrid
Your day at NTT DATA: The Software Applications Development Engineer is a seasoned subject matter expert, responsible for developing new applications and improving existing applications based on the needs of the internal organization and/or external clients. What you'll be doing (Data Engineer, 5+ years of experience): Work closely with the Lead Data Engineer to understand business requirements and translate them into technical specifications and solution design. Work closely with the Data Modeller to ensure data models support the solution design. Develop, test, and fix ETL code using Snowflake, Fivetran, SQL, and stored procedures (a brief sketch follows). Analyse data and ETL jobs to resolve defects and service tickets raised against the production solution. Develop documentation and artefacts to support projects.
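For illustration only: a minimal sketch of calling a Snowflake ETL stored procedure from Python with the snowflake-connector-python package, matching the Snowflake/SQL/stored-procedure stack this posting names. The warehouse, database, schema, and procedure names are hypothetical placeholders, not from the posting.

```python
# Minimal sketch, assuming hypothetical object names (ETL_WH, ANALYTICS,
# STAGING, LOAD_SALES_STG); credentials come from environment variables.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    cur.execute("CALL LOAD_SALES_STG()")  # run the ETL stored procedure
    print(cur.fetchone())                 # the procedure's status message
finally:
    conn.close()
```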
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
Okta is looking for a Sr. Marketing Data Operations Analyst to join the Marketing Data Operations & Technology team. Reporting into the Sr. Manager, Marketing Technology, this new role will support the management and optimisation of Okta's marketing data across our core marketing technology stack. Okta has a large marketing technology and data estate spanning an audience of millions, with inputs from a range of systems, including Okta's CRM system (Salesforce), marketing automation platform (Adobe Marketo Engage), and connected infrastructure including tools for sales outreach (Outreach), ABM (6sense, Folloze), and data enrichment (Clay, Clearbit). The Sr. Marketing Data Operations Analyst will contribute to a number of critical areas supporting Okta's drive towards operational excellence across its marketing technology estate. This includes driving overall database health and improving data quality, managing integrations in the data operations function, and conducting ongoing data maintenance and the processes that support these efforts. The role is integral to delivering a program of technical efficiency, operational excellence, and a supporting framework of data-driven insights from within the Marketing Data Operations & Technology team. It requires strong analytical skills, attention to detail, and the ability to collaborate with cross-functional teams. As such, the successful candidate will be able to demonstrate a data-driven marketing mindset and demonstrable experience of working within a data operations function or role to support and drive marketing performance. Job Duties and Responsibilities: Manage and drive data cleansing initiatives and enrichment processes to improve data accuracy and completeness. Administer and maintain key data enrichment marketing technology tools, with a focus on Clay and Clearbit, ensuring optimal configuration and utilization. Partner closely with key Marketing stakeholders to create and manage new use cases and workflows within our data enrichment tools, creating business requirement docs and technical architecture flows and monitoring/measuring business impact. Partner closely with the greater Marketing Operations team to manage data updates, maintenance, and logic within 6sense to support effective ABM strategies. Identify data gaps, discrepancies, and issues and, where appropriate, own the design and implementation of processes to address them. Assist with manual data load fixes across various platforms (Salesforce, Marketo, etc.), ensuring data integrity and resolving data discrepancies. Provide miscellaneous data operations tool support and fulfill tool provisioning requests, ensuring users have the necessary access and functionality. Drive and collaborate on the creation of a Marketing Data Ops data dictionary, ensuring data governance and clarity across tools and systems. Skills & Experience: Required: 3+ years of experience working in a data operations function or role supporting go-to-market teams. Required: Experience working with Salesforce (preference for candidates who have worked directly with systems integrations with Salesforce; Salesforce certifications are a plus); candidates should be comfortable with the core Salesforce object model. Required: Experience working with business stakeholders to understand existing workflows and business requirements and translate them into solution design and delivery.
Required: Strong critical thinker and problem solver, with an eye for detail. Preferred: Knowledge of SQL (for analytics) and comfort querying data warehouses such as Snowflake for analytical purposes. Preferred: Experience integrating with analytics and data orchestration platforms (Openprise, Fivetran, Tableau, Datorama, Google Data Studio/Looker Studio). Preferred: Exposure to a range of marketing technology applications, for example: sales outreach platforms such as Outreach/SalesLoft; ABM platforms such as 6sense/Folloze; optimization/personalization platforms such as Intellimize/Optimizely; data enrichment tools such as Leadspace/Clay/ZoomInfo/Clearbit. This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment.
Posted 1 week ago
5.0 - 10.0 years
25 - 35 Lacs
Chennai
Work from Office
Job Summary: We are seeking an experienced Manager - Data Engineer to join our dynamic team. In this role, you will be responsible for designing, implementing, and maintaining data infrastructure on Azure with an extensive focus on Azure Databricks. You will work hand in hand with our analytics team to support data-driven decision making across different external clients in a variety of industries. SCOPE OF WORK: Design, build, and maintain scalable data pipelines using Azure Data Factory (ADF), Fivetran, and other Azure services. Administer, monitor, and troubleshoot SQL Server databases, ensuring high performance and availability. Develop and optimize SQL queries and stored procedures to support data transformation and retrieval. Implement and maintain data storage solutions in Azure, including Azure Databricks, Azure SQL Database, Azure Blob Storage, and Data Lakes. Collaborate with business analysts, clients, and stakeholders to deliver insightful reports and dashboards using Power BI. Develop scripts to automate data processing tasks using languages such as Python, PowerShell, or similar. Ensure data security and compliance with industry standards and organizational policies. Stay updated with the latest technologies and trends in Azure cloud services and data engineering. Desired: experience in healthcare data analytics, including familiarity with healthcare data models such as encounter-based or claims-focused models, or in manufacturing or utility analytics. IDEAL CANDIDATE PROFILE: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. At least 5-8 years of experience in data engineering with a strong focus on Microsoft Azure and Azure Databricks. Proven expertise in SQL Server database administration and development. Experience in building and optimizing data pipelines, architectures, and data sets on Azure. Experience with dbt and Fivetran. Familiarity with Azure AI and LLMs, including Azure OpenAI. Proficiency in Power BI for creating reports and dashboards. Strong scripting skills in Python, PowerShell, or other relevant languages. Familiarity with other Azure data services (e.g., Azure Synapse Analytics, Azure Blob Storage, etc.). Knowledge of data modeling, ETL processes, and data warehousing concepts. Excellent problem-solving skills and the ability to work independently or as part of a team. Strong communication and interpersonal skills to collaborate effectively with various teams and understand business requirements. Certifications in Azure Data Engineering or related fields. Experience with machine learning and data science projects (a huge plus). Knowledge of additional BI tools and data integration platforms.
Posted 1 week ago
7.0 - 12.0 years
25 - 30 Lacs
Coimbatore
Remote
Role & responsibilities SUMMARY: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, maintain existing systems/processes and develop new features, along with reviewing, presenting, and implementing performance improvements. Duties and Responsibilities: Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Monitor active ETL jobs in production. Build out data lineage artifacts to ensure all current and future systems are properly documented. Assist with build-out design/mapping documentation to ensure development is clear and testable for QA and UAT purposes. Assess current and future data transformation needs to recommend, develop, and train new data integration tool technologies. Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations. Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs. Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults. Required Skills: This job has no supervisory responsibilities. Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work. 5+ years of experience with strong SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory). Experience working in the healthcare industry with PHI/PII. Creative, lateral, and critical thinker. Excellent communicator. Well-developed interpersonal skills. Good at prioritizing tasks and time management. Ability to describe, create, and implement new solutions. Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef). Knowledge/hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau).
Posted 2 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Hyderabad
Remote
Lead Data Engineer with Health Care Domain. Role & responsibilities. Position: Lead Data Engineer. Experience: 7+ years. Location: Hyderabad | Chennai | Remote. SUMMARY: The Data Engineer will be responsible for ETL and documentation in building data warehouse and analytics capabilities. Additionally, maintain existing systems/processes and develop new features, along with reviewing, presenting, and implementing performance improvements. Duties and Responsibilities: Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Monitor active ETL jobs in production. Build out data lineage artifacts to ensure all current and future systems are properly documented. Assist with build-out design/mapping documentation to ensure development is clear and testable for QA and UAT purposes. Assess current and future data transformation needs to recommend, develop, and train new data integration tool technologies. Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations. Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs. Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults. Required Skills: This job has no supervisory responsibilities. Need strong experience with Snowflake and Azure Data Factory (ADF). Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work. 5+ years of experience with strong SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory). Experience working in the healthcare industry with PHI/PII. Creative, lateral, and critical thinker. Excellent communicator. Well-developed interpersonal skills. Good at prioritizing tasks and time management. Ability to describe, create, and implement new solutions. Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef). Knowledge/hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau). Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).
Posted 2 weeks ago
5.0 - 10.0 years
4 - 9 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities: Advanced understanding of AWS services. Understanding of cloud-based services (GitHub, ServiceNow, Orca, Datadog, Broadcom, Fivetran). Hands-on experience with release management and deployment. Advanced understanding of Linux administration (log files, command line, system services, custom and managed package installations). Knowledge of network protocols, security, and compliance. Strong knowledge of scripting (Python, PHP, Bash). Knowledge of application integration technologies (API, middleware, webhooks).
Posted 2 weeks ago
6.0 - 11.0 years
20 - 25 Lacs
Noida, Mumbai
Work from Office
Responsibilities: Act as the data domain expert for Snowflake in a collaborative environment, demonstrating an understanding of data management best practices and patterns. Design and implement robust data architectures that meet and support business requirements, leveraging Snowflake platform capabilities. Develop and enforce data modeling standards and best practices for Snowflake environments. Develop, optimize, and maintain Snowflake data warehouses. Leverage Snowflake features such as clustering, materialized views, and semi-structured data processing to enhance data solutions. Ensure data architecture solutions meet performance, security, and scalability requirements. Stay current with the latest developments and features in Snowflake and related technologies, continually enhancing our data capabilities. Collaborate with cross-functional teams to gather business requirements, translate them into effective data solutions in Snowflake, and provide data-driven insights. Stay updated with the latest trends and advancements in data architecture and Snowflake technologies. Provide mentorship and guidance to junior data engineers and architects. Troubleshoot and resolve data architecture-related issues effectively. Skills Requirement: 5+ years of proven experience as a Data Engineer, with 3+ years as a Data Architect. Proficiency in Snowflake, with hands-on experience with features such as clustering, materialized views, and semi-structured data processing. Experience in designing and building manual or auto-ingestion data pipelines using Snowpipe. Design and develop automated monitoring processes on Snowflake using a combination of Python, PySpark, and Bash with SnowSQL (a small sketch follows). Experience with SnowSQL, developing stored procedures, and writing queries to analyze and transform data. Working experience with ETL tools like Fivetran, dbt Labs, and MuleSoft. Expertise in Snowflake concepts such as setting up resource monitors, RBAC controls, scalable virtual warehouses, SQL performance tuning, zero-copy clone, and time travel, and automating them. Excellent problem-solving skills and attention to detail. Effective communication and collaboration abilities. Relevant certifications (e.g., SnowPro Core/Advanced) are a must-have. Must have expertise in the AWS, Azure, and Salesforce Platform as a Service (PaaS) models and their integration with Snowflake to load/unload data. Strong communication; an exceptional team player with effective problem-solving skills. Educational Qualification Required: Master's degree in Business Management (MBA/PGDM) or Bachelor's degree in Computer Science, Information Technology, or a related field.
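As a hedged illustration of the "automated monitoring processes on Snowflake using Python" duty above: a sketch that reads Snowpipe activity from the SNOWFLAKE.ACCOUNT_USAGE.PIPE_USAGE_HISTORY view. It assumes a role with ACCOUNT_USAGE access; credentials are placeholders.

```python
# Monitoring sketch; assumes a role that can query SNOWFLAKE.ACCOUNT_USAGE.
import os

import snowflake.connector

QUERY = """
    SELECT pipe_name,
           SUM(credits_used)   AS credits_used,
           SUM(files_inserted) AS files_inserted
    FROM snowflake.account_usage.pipe_usage_history
    WHERE start_time > DATEADD('day', -1, CURRENT_TIMESTAMP())
    GROUP BY pipe_name
"""

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
)
try:
    for pipe, credits, files in conn.cursor().execute(QUERY):
        # A pipe that moved no files in 24h often indicates a broken
        # stage notification; flag it for follow-up.
        status = "OK" if files else "CHECK"
        print(f"{status:5} {pipe}: {credits} credits, {files} files")
finally:
    conn.close()
```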
Posted 2 weeks ago
8.0 - 13.0 years
25 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Looking for a Cloud Engineering and Operations Specialist with a deep understanding of AWS services and cloud-based services (GitHub, ServiceNow, Orca, Datadog, Broadcom, Fivetran), plus hands-on experience with release management and deployment.
Posted 2 weeks ago
3.0 - 8.0 years
7 - 17 Lacs
Pune
Work from Office
vConstruct, a Pune-based construction technology company, is seeking a Data Engineer for its Data Science and Analytics team, a close-knit group of analysts and engineers supporting all data aspects of the business. You will be responsible for designing, developing, and maintaining our data infrastructure, ensuring data integrity, and supporting various data-driven projects. You will work closely with cross-functional teams to integrate, process, and manage data from various sources, enabling business insights and enhancing operational efficiency. Responsibilities: Design, develop, and maintain robust, scalable data pipelines and ETL/ELT processes to efficiently ingest, transform, and store data from diverse sources. Collaborate with cross-functional teams to design, implement, and sustain data-driven solutions that optimize data flow and system integration. Develop and maintain pipelines to move data in real-time (streaming), on-demand, and batch modes, whether inbound to a central data warehouse, outbound to other systems, or point-to-point, focusing on security, reusability, and data quality. Implement pipelines with comprehensive error-handling mechanisms that are visible to both technical and functional teams (see the sketch after the qualifications below). Ensure optimized pipeline performance with timely data delivery, including appropriate alerts and notifications. Adhere to data engineering best practices for code management and automated deployments, incorporating validation and test automation across all data engineering efforts. Perform debugging, application issue resolution, and root cause analysis, and assist in proactive/preventive maintenance. Collaborate with the extended data team to define and enforce standards, guidelines, and data models that ensure data quality and promote best practices. Write and execute complete testing plans, protocols, and documentation for assigned portions of the data system or components; identify defects and create solutions for issues with code and integration into the data system architecture. Work closely with data analysts, business users, and developers to ensure the accuracy, reliability, and performance of data solutions. Monitor data performance, troubleshoot issues, and optimize existing solutions. Create and maintain technical documentation related to data architecture, integration flows, and processes. Organize and lead discussions with business and operational data stakeholders to understand requirements and deliver solutions. Partner with analysts, developers, and business users to build data solutions that are scalable, maintainable, and aligned with business objectives. Qualifications: 3 to 6 years of experience as a Data Engineer, with a focus on building scalable data solutions. Over 3 years of experience in scripting languages such as Python for data processing, automation, and ETL development. 3+ years of hands-on experience working with Snowflake. 3+ years of experience with data integration tools such as Azure Data Factory, Fivetran, or Matillion. Strong experience in writing complex, highly optimized SQL queries on large datasets (3+ years). Deep expertise in SQL, with a focus on database performance tuning and optimization. Experience working with data platforms like Snowflake, Azure Synapse, or Microsoft Fabric. Proven experience integrating APIs and handling diverse data sources. Ability to understand, consume, and utilize APIs, JSON, and web services for building data pipelines. Experience designing and implementing data pipelines using cloud platforms such as Azure or AWS.
Familiarity with orchestration tools like Apache Airflow or equivalent. Experience with CI/CD practices and automation in data engineering workflows. Knowledge of dbt or similar tools for data transformation is a plus. Familiarity with Power BI or other data visualization tools is a plus. Strong problem-solving skills with the ability to troubleshoot complex data issues. Excellent communication skills and a collaborative mindset to work effectively in team environments. Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field; equivalent academic and work experience can be considered. About vConstruct: vConstruct specializes in providing high-quality Building Information Modeling and construction technology services geared towards construction projects. vConstruct is a wholly owned subsidiary of DPR Construction. For more information, please visit www.vconstruct.com. About DPR Construction: DPR Construction is a national commercial general contractor and construction manager specializing in technically challenging and sustainable projects for the advanced technology, biopharmaceutical, corporate office, higher education, and healthcare markets. With the purpose of building great things (great teams, great buildings, great relationships), DPR is a truly great company. For more information, please visit www.dpr.com.
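To make the "comprehensive error-handling mechanisms visible to both technical and functional teams" item concrete, a small illustrative sketch; the extract/load bodies and the webhook URL are hypothetical placeholders, not vConstruct's actual stack.

```python
# Illustrative error-handling pattern: log every step, alert on failure,
# and fail fast so the orchestrator marks the run as failed.
import json
import logging
import urllib.request

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

ALERT_WEBHOOK = "https://example.com/hooks/data-alerts"  # placeholder


def notify(message: str) -> None:
    """Post a failure notice where both technical and functional teams see it."""
    body = json.dumps({"text": message}).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)


def run_step(name, fn, *args):
    """Run one pipeline step, logging success and alerting on failure."""
    try:
        result = fn(*args)
        log.info("step %s ok", name)
        return result
    except Exception as exc:
        log.exception("step %s failed", name)
        notify(f"Pipeline step {name} failed: {exc}")
        raise  # re-raise so the scheduler records the failure


def extract():   # placeholder extract step
    return [{"id": 1}]


def load(rows):  # placeholder load step
    log.info("loaded %d rows", len(rows))


rows = run_step("extract", extract)
run_step("load", load, rows)
```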
Posted 2 weeks ago
10.0 - 12.0 years
1 - 1 Lacs
Hyderabad
Hybrid
Role: Lead Data Engineer. Experience: 10+ years. Contract: 6+ months. Job Summary: We are seeking an experienced and results-oriented Lead Data Engineer to drive the design, development, and optimization of enterprise data solutions. This onsite role requires deep expertise in Fivetran, Snowflake, SQL, Python, and data modeling, as well as a demonstrated ability to lead teams and mentor both Data Engineers and BI Engineers. The role will play a critical part in shaping the data architecture, improving analytics readiness, and enabling self-service business intelligence through scalable star schema designs (a sketch of the pattern follows this posting). Key Responsibilities: Lead end-to-end data engineering efforts, including architecture, ingestion, transformation, and delivery. Architect and implement Fivetran-based ingestion pipelines and Snowflake data models. Create optimized star schemas to support analytics, self-service BI, and KPI reporting. Analyze and interpret existing report documentation and KPIs to guide modeling and transformation strategies. Design and implement efficient, scalable data workflows using SQL and Python. Review and extend existing reusable data engineering templates and frameworks. Provide technical leadership and mentorship to Data Engineers and BI Engineers, ensuring best practices in coding, modeling, performance tuning, and documentation. Collaborate with business stakeholders to gather requirements and translate them into scalable data solutions. Work closely with BI teams to enable robust reporting and dashboarding capabilities. Required Skills: 7+ years of hands-on data engineering experience, with 2+ years in a technical leadership or lead role. Deep expertise in Fivetran, Snowflake, and SQL development. Proficiency in Python for data transformation and orchestration. Strong understanding of data warehousing principles, including star schema design and dimensional modeling. Experience in analysing business KPIs and reports to influence data model design. Demonstrated ability to mentor both Data Engineers and BI Engineers and provide architectural guidance. Excellent problem-solving, communication, and stakeholder management skills. Share your CV at: Careers@rwavesoftech.com
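A hedged sketch of the star-schema pattern the posting centers on: one dimension table and one fact table keyed to it, created on Snowflake from Python. All object names are illustrative; note that Snowflake records PRIMARY KEY/REFERENCES constraints but does not enforce them.

```python
# Star-schema sketch: a surrogate-keyed dimension plus a fact table that
# references it. Database/schema names are placeholders.
import os

import snowflake.connector

DDL = [
    """
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_sk   INTEGER AUTOINCREMENT PRIMARY KEY,  -- surrogate key
        customer_id   STRING,                             -- natural key
        customer_name STRING,
        region        STRING
    )
    """,
    """
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_id    STRING,
        customer_sk INTEGER REFERENCES dim_customer (customer_sk),
        order_date  DATE,
        amount      NUMBER(12, 2)
    )
    """,
]

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="ANALYTICS",  # placeholder
    schema="MARTS",
)
try:
    for stmt in DDL:
        conn.cursor().execute(stmt)
finally:
    conn.close()
```

BI queries then join the fact to its dimensions on the surrogate key, which is what makes the schema friendly to self-service reporting tools.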
Posted 2 weeks ago
5.0 - 10.0 years
5 - 10 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Role & responsibilities: Design, develop, and optimize scalable data pipelines for ETL/ELT processes. Develop and maintain Python-based data processing scripts and automation tools. Write and optimize complex SQL queries (preferably in Snowflake) for data transformation and analytics. Experience with Jenkins or other CI/CD tools. Experience developing with Snowflake as the data platform. Experience with ETL/ELT tools (preferably Fivetran, dbt). Implement version control best practices using Git or other tools to manage code changes. Collaborate with cross-functional teams (analysts, product managers, and engineers) to understand business needs and translate them into technical data solutions. Ensure data integrity, security, and governance across multiple data sources. Optimize query performance and database architecture for efficiency and scalability. Lead troubleshooting and debugging efforts for data-related issues. Document data workflows, architectures, and best practices to ensure maintainability and knowledge sharing. Preferred candidate profile: 5+ years of experience in Data Engineering, Software Engineering, or a related field. Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related discipline. High proficiency in SQL (preferably Snowflake) for data modeling, performance tuning, and optimization. Strong expertise in Python for data processing and automation. Experience with Git or other version control tools in a collaborative development environment. Strong communication skills and ability to collaborate with cross-functional teams for requirements gathering and solution design. Experience working with large-scale, distributed data systems and cloud data warehouses.
Posted 3 weeks ago
5 - 7 years
15 - 25 Lacs
Pune, Mumbai (All Areas)
Hybrid
DUTIES AND RESPONSIBILITIES: Build ETL (extract, transform, and load) jobs using Fivetran and dbt for our internal projects and for customers that use various platforms like Azure, Salesforce, and AWS technologies. Monitor active ETL jobs in production. Build out data lineage artifacts to ensure all current and future systems are properly documented. Assist with build-out design/mapping documentation to ensure development is clear and testable for QA and UAT purposes. Assess current and future data transformation needs to recommend, develop, and train new data integration tool technologies. Discover efficiencies with shared data processes and batch schedules to help ensure no redundancy and smooth operations. Assist the Data Quality Analyst to implement checks and balances across all jobs to ensure data quality throughout the entire environment for current and future batch jobs. Hands-on experience in developing and implementing large-scale data warehouses, Business Intelligence, and MDM solutions, including Data Lakes/Data Vaults. SUPERVISORY RESPONSIBILITIES: This job has no supervisory responsibilities. QUALIFICATIONS: Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 6+ years of experience in business analytics, data science, software development, data modeling, or data engineering work. 3-5 years of experience with strong SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Hands-on experience with ETL tools (e.g., Informatica, Talend, dbt, Azure Data Factory). Experience working in the healthcare industry with PHI/PII. Creative, lateral, and critical thinker. Excellent communicator. Well-developed interpersonal skills. Good at prioritizing tasks and time management. Ability to describe, create, and implement new solutions. Experience with related or complementary open source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef). Knowledge/hands-on experience with BI tools and reporting software (e.g., Cognos, Power BI, Tableau). Big Data stack (e.g., Snowflake (Snowpark), Spark, MapReduce, Hadoop, Sqoop, Pig, HBase, Hive, Flume).
Posted 1 month ago
8 - 13 years
12 - 22 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Work from Office
Greetings of the day! We have an URGENT on-rolls opening for the position of "Snowflake Architect" at one of our reputed clients, working from home. Name of the Company - Confidential. Rolls - On-rolls. Mode of Employment - FTE / Sub-Con / Contract. Job Location - Remote. Job Work Timings - Night shift, 06.00 pm to 03.00 am IST. Nature of Work - Work from home. Working Days - 5 days weekly. Educational Qualification - Bachelor's degree in computer science, BCA, engineering, or a related field. Salary - Maximum CTC would be 23 LPA (the salary and benefits package will be commensurate with experience and qualifications; PF and medical insurance cover available). Languages Known - English, Hindi, and the local language. Experience - 9+ years of relevant experience in the same domain. Job Summary: We are seeking a highly skilled and experienced Snowflake Architect to lead the design, development, and implementation of scalable, secure, and high-performance data warehousing solutions on the Snowflake platform. The ideal candidate will possess deep expertise in data modelling, cloud architecture, and modern ELT frameworks. You will be responsible for architecting robust data pipelines, optimizing query performance, and ensuring enterprise-grade data governance and security. In this role, you will collaborate with data engineers, analysts, and business stakeholders to deliver efficient data solutions that drive informed decision-making across the organization. Key Responsibilities: Manage and maintain the Snowflake platform to ensure optimal performance and reliability. Collaborate with data engineers and analysts to design and implement data pipelines. Develop and optimize SQL queries for efficient data retrieval and manipulation. Create custom scripts and functions using JavaScript and Python to automate platform tasks. Troubleshoot platform issues and provide timely resolutions. Implement security best practices to protect data within the Snowflake platform. Stay updated on the latest Snowflake features and best practices to continuously improve platform performance. Required Qualifications: Bachelor's degree in computer science, engineering, or a related field. Minimum of nine years of experience in managing any database platform. Proficiency in SQL for data querying and manipulation. Strong programming skills in JavaScript and Python. Experience in optimizing and tuning Snowflake for performance. Preferred Skills: technical expertise; cloud & integration; performance & optimization; security & governance; soft skills. THE PERSON SHOULD BE WILLING TO JOIN WITHIN 07-10 DAYS OR BE AN IMMEDIATE JOINER. Interested candidates: please share your updated resume at executivehr@monalisammllp.com; you can also call or WhatsApp us at 9029895581. Please include: current/last net in hand (salary will be offered based on the interview/technical evaluation process); notice period and LWD (was/will be); reason for changing the job; total years of experience in the specific field; the location you are from; and whether you hold any offer from any other association. Regards, Monalisa Group of Services, HR Department, 9029895581 (call/WhatsApp), executivehr@monalisammllp.com
Posted 1 month ago
5 - 8 years
0 - 1 Lacs
Hyderabad
Hybrid
Job Title: Sr. Data Engineer (Fivetran SDK Connector / Hightouch Developer). Work Location: Hyderabad. Years of Experience: 5 to 8 years. Shift Timings: 3 PM to 12 AM. Skill Set: Fivetran and Fivetran SDK development; expertise in Python for connector development; understanding of Hightouch. Roles & Responsibilities: Design, build, and maintain custom connectors using the Fivetran SDK (a skeleton follows below). Develop and manage Reverse ETL pipelines using Hightouch. Integrate data from diverse APIs and source systems into cloud data warehouses. Ensure data reliability, quality, and performance across pipelines. Optimize SQL transformations and data workflows. Collaborate with data engineers, analysts, and stakeholders to deliver high-quality data solutions. Monitor and troubleshoot connector issues, ensuring robust logging and error handling. Other Specifications: 3 years of hands-on experience with Fivetran and the Fivetran SDK. Strong proficiency in Python, especially for SDK-based connector development. Advanced SQL skills for data manipulation and transformation. Practical experience with Hightouch for Reverse ETL use cases. Experience with cloud data warehouses: Snowflake, BigQuery, or Redshift. Strong understanding of REST APIs, webhooks, and authentication mechanisms. Solid knowledge of ETL/ELT pipelines, data modeling, and data syncing. Excellent problem-solving, debugging, and documentation skills.
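For orientation, a skeleton in the shape Fivetran's public Connector SDK documents (Python package fivetran-connector-sdk): an update function that resumes from a saved cursor, upserts rows, and checkpoints state. Exact class and function names should be verified against the current SDK; fetch_since is a hypothetical source-API call.

```python
# Hedged sketch of a custom Fivetran connector; verify names against the
# current fivetran-connector-sdk documentation before relying on them.
from fivetran_connector_sdk import Connector
from fivetran_connector_sdk import Operations as op


def fetch_since(cursor: str):
    # Placeholder for a paginated REST call into the source system.
    yield {"id": 1, "updated_at": "2024-01-01T00:00:00Z"}


def update(configuration: dict, state: dict):
    """Incremental sync: resume from the saved cursor, upsert, checkpoint."""
    cursor = state.get("cursor", "1970-01-01T00:00:00Z")
    for row in fetch_since(cursor):
        yield op.upsert(table="orders", data=row)
        cursor = max(cursor, row["updated_at"])
    # Persist the new cursor so the next sync stays incremental.
    yield op.checkpoint(state={"cursor": cursor})


connector = Connector(update=update)

if __name__ == "__main__":
    connector.debug()  # local test run, per the SDK's documented workflow
```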
Posted 1 month ago
1 - 4 years
3 - 6 Lacs
Pune
Work from Office
The Data Integration Engineer will play a key role in designing, building, and maintaining data integrations between core business systems such as Salesforce and SAP and our enterprise data warehouse on Snowflake. This position is ideal for an early-career professional (1 to 4 years of experience) eager to contribute to transformative data integration initiatives and learn in a collaborative, fast-paced environment. Duties & Responsibilities: Collaborate with cross-functional teams to understand business requirements and translate them into data integration solutions. Develop and maintain ETL/ELT pipelines using modern tools like Informatica IDMC to connect source systems to Snowflake. Ensure data accuracy, consistency, and security in all integration workflows. Monitor, troubleshoot, and optimize data integration processes to meet performance and scalability goals. Support ongoing integration projects, including Salesforce and SAP data pipelines, while adhering to best practices in data governance. Document integration designs, workflows, and operational processes for effective knowledge sharing. Assist in implementing and improving data quality controls at the start of processes to ensure reliable outcomes. Stay informed about the latest developments in integration technologies and contribute to team learning and improvement. Qualifications - Required Skills and Experience: 5+ years of hands-on experience in data integration, ETL/ELT development, or data engineering. Proficiency in SQL and experience working with relational databases such as Snowflake, PostgreSQL, or SQL Server. Familiarity with data integration tools such as Fivetran, Informatica Intelligent Data Management Cloud (IDMC), or similar platforms. Basic understanding of cloud platforms like AWS, Azure, or GCP. Experience working with structured and unstructured data in varying formats (e.g., JSON, XML, CSV); a small normalization sketch follows below. Strong problem-solving skills and the ability to troubleshoot data integration issues effectively. Excellent verbal and written communication skills, with the ability to document technical solutions clearly. Preferred Skills and Experience: Exposure to integrating business systems such as Salesforce or SAP into data platforms. Knowledge of data warehousing concepts and hands-on experience with Snowflake. Familiarity with APIs, event-driven pipelines, and automation workflows. Understanding of data governance principles and data quality best practices. Education: Bachelor's degree in Computer Science, Data Engineering, or a related field, or equivalent practical experience.
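The "varying formats" requirement above, sketched: normalize JSON and CSV payloads into uniform rows before a warehouse load. Field names are illustrative, not from Salesforce or SAP.

```python
# Flatten heterogeneous JSON/CSV input into one uniform row shape.
import csv
import io
import json


def rows_from_json(payload: str):
    for rec in json.loads(payload):
        yield {"id": int(rec["id"]), "email": rec.get("email", "").lower()}


def rows_from_csv(payload: str):
    for rec in csv.DictReader(io.StringIO(payload)):
        yield {"id": int(rec["id"]), "email": rec["email"].lower()}


json_rows = list(rows_from_json('[{"id": 1, "email": "A@X.COM"}]'))
csv_rows = list(rows_from_csv("id,email\n2,B@X.COM\n"))
print(json_rows + csv_rows)  # both sources now share one schema
```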
Posted 1 month ago
5 - 10 years
0 Lacs
Mysore, Bengaluru, Kochi
Hybrid
Open & Direct Walk-in Drive event | Hexaware Technologies | Snowflake & Python Data Engineer/Architect in Bangalore, Karnataka on 12th April [Saturday] 2025 - Snowflake/Python/SQL & PySpark. Dear Candidate, I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as a Data Engineer/Architect. We are hosting an Open Walk-in Drive in Bangalore, Karnataka on 12th April [Saturday] 2025, and we believe your skills in Snowflake/Snowpark/Python/SQL & PySpark align perfectly with what we are seeking. Details of the Walk-in Drive: Date: 12th April [Saturday] 2025. Experience: 4 to 12 years. Time: 9.00 AM to 5 PM. Venue: Hotel Grand Mercure Bangalore, 12th Main Rd, 3rd Block, Koramangala, Bengaluru, Karnataka 560034. Point of Contact: Azhagu Kumaran Mohan / +91-9789518386. Work Location: Open (Hyderabad/Bangalore/Pune/Mumbai/Noida/Dehradun/Chennai/Coimbatore). Key Skills and Experience: As a Data Engineer, we are looking for candidates who possess expertise in the following: Snowflake, Python, Fivetran, Snowpark & Snowpipe, SQL, PySpark/Spark, DWH. Roles and Responsibilities: As part of our dynamic team, you will be responsible for: 4-15 years of total IT experience on any ETL/Snowflake cloud tool. Minimum 3 years of experience in Snowflake. Minimum 3 years of experience in querying and processing data using Python. Strong SQL, with experience using analytical functions, materialized views, and stored procedures. Experience with data loading features of Snowflake such as stages, streams, tasks, and Snowpipe. Working knowledge of processing semi-structured data. What to Bring: updated resume, photo ID, passport-size photo. How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com / +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team. Note: candidates with less than 4 years of total experience will not be shortlisted for the interview.
Posted 2 months ago
5 - 10 years
0 Lacs
Pune, Nagpur, Mumbai (All Areas)
Hybrid
Open & Direct Walk-in Drive event | Hexaware Technologies | Snowflake & Snowpark Data Engineer/Architect in Pune, Maharashtra on 5th April [Saturday] 2025 - Snowflake/Snowpark/SQL & PySpark. Dear Candidate, I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as a Data Engineer/Architect. We are hosting an Open Walk-in Drive in Pune, Maharashtra on 5th April [Saturday] 2025, and we believe your skills in Snowflake/Snowpark/Python/SQL & PySpark align perfectly with what we are seeking. Details of the Walk-in Drive: Date: 5th April [Saturday] 2025. Experience: 4 to 12 years. Time: 9.00 AM to 5 PM. Venue: Hexaware Technologies Limited, Phase 3, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi, Pimpri-Chinchwad, Pune, Maharashtra 411057. Point of Contact: Azhagu Kumaran Mohan / +91-9789518386. Work Location: Open (Hyderabad/Bangalore/Pune/Mumbai/Noida/Dehradun/Chennai/Coimbatore). Key Skills and Experience: As a Data Engineer, we are looking for candidates who possess expertise in the following: Snowflake, Python, Fivetran, Snowpark & Snowpipe, SQL, PySpark/Spark, DWH. Roles and Responsibilities: As part of our dynamic team, you will be responsible for: 4-15 years of total IT experience on any ETL/Snowflake cloud tool. Minimum 3 years of experience in Snowflake. Minimum 3 years of experience in querying and processing data using Python. Strong SQL, with experience using analytical functions, materialized views, and stored procedures. Experience with data loading features of Snowflake such as stages, streams, tasks, and Snowpipe. Working knowledge of processing semi-structured data. What to Bring: updated resume, photo ID, passport-size photo. How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com / +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team. Note: candidates with less than 4 years of total experience will not be shortlisted for the interview.
Posted 2 months ago
5 - 10 years
17 - 30 Lacs
Hyderabad
Remote
At Mitratech, we are a team of technocrats focused on building world-class products that simplify operations in the Legal, Risk, Compliance, and HR functions of Fortune 100 companies. We are a close-knit, globally dispersed team that thrives in an ecosystem that supports individual excellence and takes pride in its diverse and inclusive work culture centered around great people practices, learning opportunities, and having fun! Our culture is the ideal blend of entrepreneurial spirit and enterprise investment, enabling the chance to move at a rapid pace with some of the most complex, leading-edge technologies available. Given our continued growth, we always have room for more intellect, energy, and enthusiasm - join our global team and see why it's so special to be a part of Mitratech! Job Description: We are seeking a highly motivated and skilled Analytics Engineer to join our dynamic data team. The ideal candidate will possess a strong background in data engineering and analytics, with hands-on experience in modern analytics tools such as Airbyte, Fivetran, dbt, Snowflake, and Airflow. This role will be pivotal in transforming raw data into valuable insights, ensuring data integrity, and optimizing our data infrastructure to support the organization's data platform. Essential Duties & Responsibilities: Data Integration and ETL Processes: Design, implement, and manage ETL pipelines using tools like Airbyte and Fivetran to ensure efficient and accurate data flow from various sources into our Snowflake data warehouse. Maintain and optimize existing data integration workflows to improve performance and scalability. Data Modeling and Transformation: Develop and maintain data models using dbt/dbt Cloud to transform raw data into structured, high-quality datasets that meet business requirements. Ensure data consistency and integrity across various datasets and implement data quality checks. Data Warehousing: Manage and optimize our Redshift/Snowflake data warehouses, ensuring they meet performance, storage, and security requirements. Implement best practices for data warehouse management, including partitioning, clustering, and indexing. Collaboration and Communication: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions that meet their needs. Communicate complex technical concepts to non-technical stakeholders in a clear and concise manner. Continuous Improvement: Stay updated with the latest developments in data engineering and analytics tools, and evaluate their potential to enhance our data infrastructure. Identify and implement opportunities for process improvements, automation, and optimization within the data pipeline. Requirements & Skills: Education and Experience: Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field. 3-5 years of experience in data engineering or analytics engineering roles. Experience in AWS and DevOps is a plus. Technical Skills: Proficiency with modern ETL tools such as Airbyte and Fivetran. Must have experience with dbt for data modeling and transformation. Extensive experience working with Snowflake or similar cloud data warehouses. Solid understanding of SQL and experience writing complex queries for data extraction and manipulation. Familiarity with Python or other programming languages used for data engineering tasks. Analytical Skills: Strong problem-solving skills and the ability to troubleshoot data-related issues.
Ability to understand business requirements and translate them into technical specifications. Soft Skills: Excellent communication and collaboration skills. Strong organizational skills and the ability to manage multiple projects simultaneously. Detail-oriented with a focus on data quality and accuracy. We are an equal-opportunity employer that values diversity at all levels. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, national origin, age, sexual orientation, gender identity, disability, or veteran status.
Posted 2 months ago
5 - 7 years
7 - 9 Lacs
Bengaluru
Work from Office
Snowflake Data Engineer (only immediate joiners, 0-7 days). Key Responsibilities: Develop and implement efficient data pipelines and ETL processes to migrate and manage client, investment, and accounting data in Snowflake. Work closely with the fund teams to understand data structures and business requirements, ensuring data accuracy and quality. Monitor and troubleshoot data pipelines, ensuring high availability and reliability of data systems. Optimize Snowflake database performance by designing scalable and cost-effective solutions. Design the Snowflake data model to effectively handle business needs. Work closely with the AI Engineer and build data pipelines where necessary to support AI/ML projects. Skills Required: 5+ years of experience in IT working on data projects, with 3+ years of experience with Snowflake. Proficiency in Snowflake Data Cloud, including schema design, data partitioning, and query optimization. Strong SQL and Python skills; hands-on experience working with Python libraries such as PySpark, pandas, and Beautiful Soup. Experience with ETL/ELT tools like Fivetran, Apache Spark, and dbt. Experience with RESTful APIs (a sketch of a typical API-to-Snowflake load follows below). Familiarity with workload automation and job scheduling tools such as Control-M or Apache Airflow. Familiarity with data governance frameworks. Familiarity with Azure cloud.
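A hedged sketch of the RESTful-API-to-Snowflake path this posting implies, using requests, pandas, and the connector's write_pandas helper; the API URL, database objects, and credentials are assumptions for illustration only.

```python
# API-to-Snowflake load sketch; requires requests, pandas, and
# snowflake-connector-python[pandas]. All names are placeholders.
import os

import pandas as pd
import requests
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

resp = requests.get("https://api.example.com/v1/positions", timeout=30)
resp.raise_for_status()
df = pd.json_normalize(resp.json())  # flatten nested JSON into columns

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    database="FUND_DATA",  # hypothetical
    schema="RAW",
)
try:
    ok, _, nrows, _ = write_pandas(
        conn, df, table_name="POSITIONS", auto_create_table=True
    )
    print(f"loaded={ok} rows={nrows}")
finally:
    conn.close()
```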
Posted 2 months ago
5 - 8 years
15 - 30 Lacs
Chennai
Hybrid
Job Title: Data Engineer. Designation: Manager. Location: Chennai (we can bear the relocation cost if you are relocating). Experience Level: 5-8 years. Job Summary: We are seeking an experienced Manager - Data Engineer to join our dynamic team. In this role, you will be responsible for designing, implementing, and maintaining data infrastructure on Azure with an extensive focus on Azure Databricks. You will work hand in hand with our analytics team to support data-driven decision making across different external clients in a variety of industries. SCOPE OF WORK: Design, build, and maintain scalable data pipelines using Azure Data Factory (ADF), Fivetran, and other Azure services. Administer, monitor, and troubleshoot SQL Server databases, ensuring high performance and availability. Develop and optimize SQL queries and stored procedures to support data transformation and retrieval. Implement and maintain data storage solutions in Azure, including Azure Databricks, Azure SQL Database, Azure Blob Storage, and Data Lakes. Collaborate with business analysts, clients, and stakeholders to deliver insightful reports and dashboards using Power BI. Develop scripts to automate data processing tasks using languages such as Python, PowerShell, or similar. Ensure data security and compliance with industry standards and organizational policies. Stay updated with the latest technologies and trends in Azure cloud services and data engineering. Desired: experience in healthcare data analytics, including familiarity with healthcare data models such as encounter-based or claims-focused models, or in manufacturing or utility analytics. IDEAL CANDIDATE PROFILE: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. At least 5-8 years of experience in data engineering with a strong focus on Microsoft Azure and Azure Databricks. Proven expertise in SQL Server database administration and development. Experience in building and optimizing data pipelines, architectures, and data sets on Azure. Experience with dbt and Fivetran. Familiarity with Azure AI and LLMs, including Azure OpenAI. Proficiency in Power BI for creating reports and dashboards. Strong scripting skills in Python, PowerShell, or other relevant languages. Familiarity with other Azure data services (e.g., Azure Synapse Analytics, Azure Blob Storage, etc.). Knowledge of data modeling, ETL processes, and data warehousing concepts. Excellent problem-solving skills and the ability to work independently or as part of a team. Strong communication and interpersonal skills to collaborate effectively with various teams and understand business requirements. Certifications in Azure Data Engineering or related fields. Experience with machine learning and data science projects (a huge plus). Knowledge of additional BI tools and data integration platforms. Thanks, Aukshaya
Posted 2 months ago
5 - 10 years
10 - 20 Lacs
Chennai, Pune, Noida
Work from Office
Interested candidates can share resumes at deepali.rawat@rsystems.com. Must have: SQL, dbt, Python, data quality, and data modelling. Good to have: Snowflake DB, Snowpipe, Fivetran. The resource should be an expert in dbt and SQL: able to develop and maintain dbt models, understand data flow, and perform data quality checks and testing of data using dbt (see the illustrative model sketch below).
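By way of illustration, dbt also supports Python models alongside its SQL models (on Snowflake they run via Snowpark); below is a minimal model with a simple data-quality gate. stg_orders is a hypothetical upstream model, and typical dbt quality work would pair SQL models with schema tests rather than rely on this alone.

```python
# models/clean_orders.py -- a minimal dbt Python model (hypothetical name).
def model(dbt, session):
    dbt.config(materialized="table")
    orders = dbt.ref("stg_orders")  # Snowpark DataFrame on Snowflake
    # Simple data-quality gate: keep only rows with a positive amount.
    return orders.filter(orders["AMOUNT"] > 0)
```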
Posted 2 months ago
4 - 9 years
0 Lacs
Mysore, Bengaluru, Hyderabad
Hybrid
Open & Direct Walk-in Drive event | Hexaware Technologies | Snowflake & Snowpark Data Engineer/Architect in Bangalore, Karnataka on 29th March [Saturday] 2025 - Snowflake/Snowpark/SQL & PySpark. Dear Candidate, I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as a Data Engineer/Architect. We are hosting an Open Walk-in Drive in Bangalore, Karnataka on 29th March [Saturday] 2025, and we believe your skills in Snowflake/Snowpark/Python/SQL & PySpark align perfectly with what we are seeking. Experience Level: 4 to 12 years. Details of the Walk-in Drive: Date: 29th March [Saturday] 2025. Experience: 5 to 15 years. Time: 9.30 AM to 4 PM. Point of Contact: Azhagu Kumaran Mohan / +91-9789518386. Venue: Hexaware Technologies Ltd, Shanti Niketan, 11th Floor, Crescent - 2 Prestige, Whitefield Main Rd, Mahadevapura, Bengaluru, Karnataka 560048. Work Location: Open (Hyderabad/Bangalore/Pune/Mumbai/Noida/Dehradun/Chennai/Coimbatore). Key Skills and Experience: As a Data Engineer, we are looking for candidates who possess expertise in the following: Snowflake, Python, Fivetran, Snowpark & Snowpipe, SQL, PySpark/Spark, DWH. Roles and Responsibilities: As part of our dynamic team, you will be responsible for: 4-15 years of total IT experience on any ETL/Snowflake cloud tool. Minimum 3 years of experience in Snowflake. Minimum 3 years of experience in querying and processing data using Python. Strong SQL, with experience using analytical functions, materialized views, and stored procedures. Experience with data loading features of Snowflake such as stages, streams, tasks, and Snowpipe. Working knowledge of processing semi-structured data. What to Bring: updated resume, photo ID, passport-size photo. How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com - +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team. Note: candidates with less than 4 years of total experience will not be shortlisted for the interview.
Posted 2 months ago
7 - 12 years
25 - 30 Lacs
Bengaluru
Remote
8+ years of experience in data engineering or a related field. Strong expertise in Snowflake and Azure Data Factory (ADF)/Fivetran for data integration and orchestration.
Posted 2 months ago
4 - 9 years
25 - 30 Lacs
Chennai, Delhi NCR, Bengaluru
Work from Office
We're seeking an experienced Data Engineer to join our team on a contract basis. The ideal candidate will design, develop, and maintain data pipelines and architectures using Fivetran, Airflow, AWS, and other technologies. Responsibilities: Design, build, and maintain data pipelines using Fivetran, Airflow, and AWS Glue (an orchestration sketch follows below). Develop and optimize data warehousing solutions using Amazon Redshift. Implement data transformation and loading processes using AWS Athena and SQL. Ensure data quality, security, and compliance. Collaborate with cross-functional teams to integrate data solutions. Troubleshoot and optimize data pipeline performance. Implement data governance and monitoring using AWS services. Requirements: 7-10 years of experience in data engineering. Strong expertise in Fivetran, Airflow, DB2, AWS (Glue, Athena, Redshift, Lambda), Python, and SQL. Experience with data warehousing, ETL, and data pipeline design. Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad.
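A sketch of the Airflow orchestration layer named in the responsibilities, using Airflow's documented DAG API (assumes Airflow 2.4+ for the schedule parameter); the task bodies are placeholders for the Fivetran/Glue/Redshift steps, not a real integration.

```python
# Two-step daily DAG: trigger the ingestion sync, then run transforms.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def trigger_sync():
    print("kick off the Fivetran sync here")  # placeholder


def run_transform():
    print("run the Glue job / Redshift SQL here")  # placeholder


with DAG(
    dag_id="daily_warehouse_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    sync = PythonOperator(task_id="fivetran_sync", python_callable=trigger_sync)
    transform = PythonOperator(task_id="transform", python_callable=run_transform)
    sync >> transform  # transform runs only after the sync succeeds
```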
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Hyderabad
Work from Office
Tecovas is looking for an Analytics Engineer to join our growing and dynamic Data Team. This position will play an integral role in democratizing data access and use across all departments at Tecovas. Reporting to the Director of Data, you will be helping to build out the company's data pipelines, data warehouse, and other data products, and will play a key role in ensuring Tecovas has a best-in-class data practice. This candidate is strongly encouraged to work from our HQ office in Austin, TX, with the ability to work remotely on other days. What you'll do: Develop and maintain data models using dbt, ensuring a single-source-of-truth data warehouse. Coordinate cross-functionally to ensure business logic and metrics are accurately captured and aligned. Collaborate with Data Science, Analytics, Core Systems, and the rest of the Tech team to support advanced data projects. Advance data monitoring, security, and compliance efforts to align with modern best practices. Improve data infrastructure using software engineering best practices: data testing, observability, orchestration. Improve internal tech documentation and business-facing documentation/data dictionary. Develop and support Data Science and Advanced Analytics pipelines with creative and unique analytics engineering solutions. Experience we're looking for: Bachelor's degree in computer science, engineering, or a related field. 5+ years of experience as a data engineer, analytics engineer, or similar role. Expertise with dbt. Expertise with modern data engineering best practices, including CDC, observability, quality testing, and performance and cost optimization. Strong experience with Python, SQL, and Git. Experience with Fivetran, Stitch, or other ETL/ELT tools. Familiarity with cloud-based platforms like BigQuery, Airflow, or other tools (GCP preferred, but equivalent experience is welcome). Excellent interpersonal and communication skills. What you bring to the table: You are highly organized and a self-starter. You feel confident working in a fast-paced environment. You are able to quickly learn new systems and implement new procedures. You can easily collaborate with cross-functional partners. You have a positive attitude and are motivated by a challenge.
Posted 3 months ago