5.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, ETL, and related tools. With a minimum of 5 years of experience in Data Engineering, you have expertise in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This role offers you an exciting opportunity to work with cutting-edge technologies in a collaborative environment and contribute to building scalable, high-performance data solutions.

Your responsibilities will include:
- Developing and maintaining data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes across various data sources.
- Writing complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
- Implementing advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT, and designing high-performance data architectures (an illustrative sketch of this pattern follows this listing).
- Collaborating with business stakeholders to understand data needs and translating business requirements into technical solutions.
- Performing root cause analysis on data-related issues, ensuring effective resolution, and maintaining high data quality standards.
- Working closely with cross-functional teams to integrate data solutions and creating clear documentation for data processes and models.

Your qualifications should include:
- Expertise in Snowflake for data warehousing and ELT processes.
- Strong proficiency in SQL for relational databases and writing complex queries.
- Experience with Informatica PowerCenter for data integration and ETL development.
- Proficiency in using Power BI for data visualization and business intelligence reporting.
- Familiarity with Sigma Computing, Tableau, Oracle, DBT, and cloud services such as Azure, AWS, or GCP.
- Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
- Proficiency in Python for data processing (knowledge of other languages such as Java or Scala is a plus).

Education required for this role is a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field. This position will be based in Bangalore, Chennai, Kolkata, or Pune. If you meet the above requirements and are passionate about data engineering and analytics, this is an excellent opportunity to leverage your skills and contribute to impactful data solutions.
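As context for the SCD Type-2 responsibility above, here is a minimal, illustrative sketch of how such a merge can be expressed from Python against Snowflake. It is not the employer's implementation: the table names (dim_customer, stg_customer), key columns, and connection parameters are assumptions, and in a DBT project the same pattern would typically be handled by a snapshot or an incremental model instead.

```python
# Illustrative SCD Type-2 upsert against Snowflake (hypothetical tables and columns).
# Requires: pip install snowflake-connector-python
import snowflake.connector

SCD2_MERGE = """
MERGE INTO analytics.dim_customer AS tgt
USING analytics.stg_customer AS src
  ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
WHEN MATCHED AND tgt.attr_hash <> src.attr_hash THEN
  UPDATE SET tgt.is_current = FALSE, tgt.valid_to = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN
  INSERT (customer_id, attr_hash, name, segment, valid_from, valid_to, is_current)
  VALUES (src.customer_id, src.attr_hash, src.name, src.segment,
          CURRENT_TIMESTAMP(), NULL, TRUE)
"""

# Changed customers need a second pass to insert their new "current" version,
# because a single MERGE cannot both update and insert for the same source row.
INSERT_NEW_VERSIONS = """
INSERT INTO analytics.dim_customer
  (customer_id, attr_hash, name, segment, valid_from, valid_to, is_current)
SELECT src.customer_id, src.attr_hash, src.name, src.segment,
       CURRENT_TIMESTAMP(), NULL, TRUE
FROM analytics.stg_customer src
LEFT JOIN analytics.dim_customer tgt
  ON tgt.customer_id = src.customer_id AND tgt.is_current = TRUE
WHERE tgt.customer_id IS NULL
"""

def run_scd2_load() -> None:
    # Placeholder credentials; a real job would read these from a secrets manager.
    conn = snowflake.connector.connect(
        account="your_account", user="your_user", password="your_password",
        warehouse="TRANSFORM_WH", database="ANALYTICS", schema="ANALYTICS",
    )
    try:
        cur = conn.cursor()
        cur.execute(SCD2_MERGE)          # close out changed rows, insert brand-new keys
        cur.execute(INSERT_NEW_VERSIONS)  # open new current versions for changed keys
    finally:
        conn.close()

if __name__ == "__main__":
    run_scd2_load()
```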
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
maharashtra
On-site
As the Technical Lead for Data Engineering at Assent, you will collaborate with other team members to identify opportunities and assess the feasibility of solutions. Your role will involve providing technical guidance, influencing decision-making, and aligning data engineering initiatives with business goals. You will drive the technical strategy, team execution, and process improvements to build resilient and scalable data systems. Mentoring a growing team and building robust, scalable data infrastructure will be essential aspects of your responsibilities. Your key requirements and responsibilities will include driving the technical execution of data engineering projects, working closely with Architecture members to design and implement scalable data pipelines, and providing technical guidance to ensure best practices in data engineering. You will collaborate cross-functionally with various teams to define and execute data initiatives, plan and prioritize work with the team manager, and stay updated with emerging technologies to drive their adoption. To be successful in this role, you should have 10+ years of experience in data engineering or related fields, expertise in cloud data platforms such as AWS, proficiency in modern data technologies like Spark, Airflow, and Snowflake, and a deep understanding of distributed systems and data pipeline design. Strong programming skills in languages like Python, SQL, or Scala, experience with infrastructure as code and DevOps best practices, and the ability to influence technical direction and advocate for best practices are also necessary. Strong communication and leadership skills, a learning mindset, and experience in security, compliance, and governance related to data systems will be advantageous. At Assent, we value your talent, energy, and passion, and offer various benefits to support your well-being, financial health, and personal growth. Our commitment to diversity, equity, and inclusion ensures that all team members are included, valued, and provided with equal opportunities for success. If you require any assistance or accommodation during the interview process, please feel free to contact us at talent@assent.com.,
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Lead Data Engineer at Mastercard, you will be a key player in the Mastercard Services Technology team, responsible for driving the mission to unlock the potential of data assets by innovating, managing big data assets, ensuring accessibility of data, and enforcing standards and principles in the Big Data space. Your role will involve designing and building scalable, cloud-native data platforms using PySpark, Python, and modern data engineering practices. You will mentor and guide other engineers, foster a culture of curiosity and continuous improvement, and create robust ETL/ELT pipelines that integrate with various systems. Your responsibilities will include decomposing complex problems into scalable components aligned with platform goals, championing best practices in data engineering, collaborating across teams, supporting data governance and quality efforts, and optimizing cloud infrastructure components related to data engineering workflows. You will actively participate in architectural discussions, iteration planning, and feature sizing meetings while adhering to Agile processes. To excel in this role, you should have at least 5 years of hands-on experience in data engineering with strong PySpark and Python skills. You must possess solid experience in designing and implementing data models, pipelines, and batch/stream processing systems. Additionally, a strong foundation in data modeling, database design, and performance optimization is required. Experience working with cloud platforms like AWS, Azure, or GCP and knowledge of modern data architectures and data lifecycle management are essential. Furthermore, familiarity with CI/CD practices, version control, and automated testing is crucial. You should demonstrate the ability to mentor junior engineers effectively, possess excellent communication and collaboration skills, and hold a Bachelor's degree in computer science, Engineering, or a related field. Comfort with Agile/Scrum development environments, curiosity, adaptability, problem-solving skills, and a drive for continuous improvement are key traits for success in this role. Experience with integrating heterogeneous systems, building resilient data pipelines across cloud environments, orchestration tools, data governance practices, containerization, infrastructure automation, and exposure to machine learning data pipelines or MLOps will be advantageous. Holding a Master's degree, relevant certifications, or contributions to open-source/data engineering communities will be a bonus.,
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
As a Data Governance and Management Developer at Assent, you will play a crucial role in ensuring the quality and reliability of critical data across systems and domains. Your responsibilities will include defining and implementing data quality standards, developing monitoring pipelines to detect data issues, conducting data profiling assessments, and designing data quality dashboards. You will collaborate with cross-functional teams to resolve data anomalies and drive continuous improvement in data quality.

Key Requirements & Responsibilities:
- Define and implement data quality rules, validation checks, and metrics for critical business domains.
- Develop Data Quality (DQ) monitoring pipelines and alerts to proactively detect data issues (an illustrative sketch follows this listing).
- Conduct regular data profiling and quality assessments to identify gaps, inconsistencies, duplicates, and anomalies.
- Design and maintain data quality dashboards and reports for visibility into trends and issues.
- Utilize generative AI to automate workflows, enhance data quality, and support responsible prompt usage.
- Collaborate with data owners, stewards, and technical teams to resolve data quality issues.
- Develop and document standard operating procedures (SOPs) for issue management and escalation workflows.
- Support root cause analysis (RCA) for recurring or high-impact data quality problems.
- Define and monitor key data quality KPIs and drive continuous improvement through insights and analysis.
- Evaluate and recommend data quality tools that scale with the enterprise.
- Provide recommendations for enhancing data processes, governance practices, and quality standards.
- Ensure compliance with internal data governance policies, privacy standards, and audit requirements.
- Adhere to corporate security policies and procedures set by Assent.

Qualifications:
- 2-5 years of experience in a data quality, data analyst, or similar role.
- Degree in Computer Science, Information Systems, Data Science, or a related field.
- Strong understanding of data quality principles.
- Proficiency in SQL, GitHub, R, Python, SQL Server, and BI tools like Tableau, Power BI, or Sigma.
- Experience with cloud data platforms (e.g., Snowflake, BigQuery) and data transformation tools (e.g., dbt).
- Exposure to graph databases and GenAI tools.
- Ability to interpret dashboards and communicate data quality findings effectively.
- Understanding of data governance frameworks and regulatory considerations.
- Strong problem-solving skills, attention to detail, and familiarity with agile work environments.
- Excellent verbal and written communication skills.

Join Assent and be part of a dynamic team that values wellness, financial benefits, lifelong learning, and diversity, equity, and inclusion. Make a difference in supply chain sustainability and contribute to meaningful work that impacts the world. Contact talent@assent.com for assistance or accommodation during the interview process.
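To illustrate the kind of DQ monitoring pipeline described above, here is a minimal sketch that runs rule-based checks against a warehouse over SQLAlchemy and reports failures. The table names, thresholds, and connection URL are assumptions for illustration, not details from the posting.

```python
# Illustrative data quality checks (hypothetical tables and thresholds).
# Requires: pip install sqlalchemy snowflake-sqlalchemy  (or any SQLAlchemy dialect)
from dataclasses import dataclass
from sqlalchemy import create_engine, text

@dataclass
class DQRule:
    name: str
    sql: str          # query must return a single numeric value
    max_allowed: int  # rule fails when the value exceeds this threshold

RULES = [
    DQRule("orders_null_customer_id",
           "SELECT COUNT(*) FROM analytics.orders WHERE customer_id IS NULL", 0),
    DQRule("orders_duplicate_keys",
           "SELECT COUNT(*) - COUNT(DISTINCT order_id) FROM analytics.orders", 0),
    DQRule("orders_stale_load",
           "SELECT DATEDIFF('hour', MAX(loaded_at), CURRENT_TIMESTAMP()) "
           "FROM analytics.orders", 24),
]

def run_checks(engine_url: str) -> list[str]:
    engine = create_engine(engine_url)
    failures = []
    with engine.connect() as conn:
        for rule in RULES:
            value = conn.execute(text(rule.sql)).scalar()
            if value is not None and value > rule.max_allowed:
                failures.append(f"{rule.name}: observed {value}, allowed {rule.max_allowed}")
    return failures

if __name__ == "__main__":
    # Placeholder URL; any SQLAlchemy-supported warehouse works the same way.
    for line in run_checks("snowflake://user:password@account/ANALYTICS"):
        print("DQ FAILURE:", line)
```

In practice these results would feed the dashboards and alerting mentioned in the listing, for example by writing failures to a results table that Tableau or Power BI reads.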
Posted 3 weeks ago
4.0 - 9.0 years
15 - 27 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Location: Kolkata, Hyderabad, Bangalore
Experience: 4 to 17 years
Band: 4B, 4C, 4D
Skill set: Snowflake, AWS/Azure, Python, ETL

Job Description:
- Experience in the IT industry.
- Working experience with building productionized data ingestion and processing data pipelines in Snowflake.
- Strong understanding of Snowflake architecture.
- Fully well-versed with data warehousing concepts.
- Expertise and excellent understanding of Snowflake features and integration of Snowflake with other data processing tools.
- Able to create data pipelines for ETL/ELT.
- Excellent presentation and communication skills, both written and verbal.
- Ability to problem solve and architect in an environment with unclear requirements.
- Able to create high-level and low-level design documents based on requirements.
- Hands-on experience in configuration, troubleshooting, testing, and managing data platforms, on premises or in the cloud.
- Awareness of data visualisation tools and methodologies.
- Work independently on business problems and generate meaningful insights.
- Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory.
- Should have experience implementing Snowflake best practices.
- Snowflake SnowPro Core certification will be an added advantage.

Roles and Responsibilities:
- Requirement gathering, creating design documents, providing solutions to the customer, working with the offshore team, etc.
- Writing SQL queries against Snowflake and developing scripts to Extract, Load, and Transform data.
- Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, cloning, the optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight, and Streamlit (an illustrative ingestion sketch follows this listing).
- Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems.
- Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF).
- Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.
- Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts.
- Knowledge of ETL (Extract, Transform, Load) processes and tools, and ability to design and develop efficient ETL jobs using Python or PySpark.
- Should have some experience with Snowflake RBAC and data security.
- Should have good experience implementing CDC or SCD Type-2.
- Should have good experience implementing Snowflake best practices.
- In-depth understanding of Data Warehouse and ETL concepts and data modelling.
- Experience in requirement gathering, analysis, design, development, and deployment.
- Should have experience building data ingestion pipelines.
- Optimize and tune data pipelines for performance and scalability.
- Able to communicate with clients and lead a team.
- Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs.
- Good to have experience in deployment using CI/CD tools and experience with repositories such as Azure Repos or GitHub.

Qualifications we seek in you!
Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant experience as a Snowflake Data Engineer.

Skill matrix: Snowflake, Python/PySpark/DBT, AWS/Azure, ETL concepts, Data Warehousing concepts, Data Modeling, Design patterns
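As a minimal sketch of the ingestion pattern mentioned above (loading files from an S3 external stage into Snowflake with COPY INTO), the snippet below uses the Snowflake Python connector. The stage, table, and credential values are placeholders rather than details from the posting; production setups would more often use Snowpipe or a Task/Stream pair for continuous loads.

```python
# Illustrative bulk load from an S3 external stage into Snowflake (hypothetical names).
# Requires: pip install snowflake-connector-python
import snowflake.connector

LOAD_STATEMENTS = [
    # One-off stage pointing at an S3 prefix; real projects usually manage this in DDL scripts.
    """CREATE STAGE IF NOT EXISTS raw.orders_stage
         URL = 's3://example-bucket/orders/'
         CREDENTIALS = (AWS_KEY_ID = '***' AWS_SECRET_KEY = '***')
         FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)""",
    # Idempotent load: COPY INTO skips files it has already ingested.
    """COPY INTO raw.orders
         FROM @raw.orders_stage
         ON_ERROR = 'ABORT_STATEMENT'""",
]

def load_orders() -> None:
    conn = snowflake.connector.connect(
        account="your_account", user="your_user", password="your_password",
        warehouse="LOAD_WH", database="RAW", schema="RAW",
    )
    try:
        cur = conn.cursor()
        for stmt in LOAD_STATEMENTS:
            cur.execute(stmt)
        # The final COPY returns one row per file with its load status.
        for row in cur.fetchall():
            print(row)
    finally:
        conn.close()

if __name__ == "__main__":
    load_orders()
```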
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Data Engineer specializing in Data Warehousing and Business Intelligence, you will play a critical role in architecting scalable data warehouses and optimizing ETL pipelines to support analytics and reporting needs. Your expertise in SQL query optimization, database management, and data governance will ensure data accuracy, consistency, and completeness across structured and semi-structured datasets. You will collaborate with cross-functional teams to propose and implement data solutions, leveraging your strong SQL skills and hands-on experience with MySQL, PostgreSQL, and Spark. Your proficiency in tools like Apache Airflow for workflow orchestration and BI platforms such as Power BI, Tableau, and Apache Superset will enable you to create insightful dashboards and reports that drive informed decision-making. A key aspect of your role will involve implementing data governance best practices, defining data standards, access controls, and policies to maintain a well-governed data ecosystem. Your ability to troubleshoot data challenges independently and identify opportunities for system improvements will be essential in ensuring the efficiency and effectiveness of data operations. If you have 5-7 years of experience in data engineering and BI, along with a strong understanding of data modeling techniques, this position at Zenda offers you the opportunity to make a significant impact by designing and developing innovative data solutions. Experience with dbt for data transformations would be a bonus, showcasing your expertise in enhancing data transformation processes.,
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a DevOps Engineer for our team based in Europe, you will be responsible for leveraging your skills in Informatica Powercenter and PowerExchange, Datavault modeling, and Snowflake. With over 7 years of experience, you will bring valuable expertise in ETL development, specifically with Informatica Powercenter and Datavault modeling. Your proficiency in DevOps practices and SAFe methodologies will be essential in ensuring the smooth operation of our systems. Moreover, your hands-on experience with Snowflake and DBT will be advantageous in optimizing our data processes. You will have the opportunity to work within a scrum team environment, where your contributions will be vital. If you have previous experience as a Scrum Master or aspire to take on such a role, we encourage you to apply. If you are a detail-oriented professional with a passion for driving efficiency and innovation in a dynamic environment, we would love to hear from you. Please send your profile to contact@squalas.com to be considered for this exciting opportunity.,
Posted 3 weeks ago
7.0 - 9.0 years
20 - 25 Lacs
Hyderabad, Bengaluru
Work from Office
Immediate Joiners Only

Role & responsibilities:
- 6+ years of experience with Snowflake (Snowpipe, Streams, Tasks)
- Strong proficiency in SQL for high-performance data transformations
- Hands-on experience building ELT pipelines using cloud-native tools
- Proficiency in dbt for data modeling and workflow automation
- Python skills (Pandas, PySpark, SQLAlchemy) for data processing
- Experience with orchestration tools like Airflow or Prefect (an illustrative DAG sketch follows this listing)
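For context on the orchestration skills listed above, here is a minimal Airflow DAG sketch that chains a raw-load step with a dbt build. The project paths, schedule, and commands are assumptions for illustration; a real pipeline would swap the BashOperator calls for the team's own operators or a managed dbt integration.

```python
# Illustrative Airflow DAG: load raw data, then run dbt models (hypothetical paths/commands).
# Requires: apache-airflow 2.x installed; dbt available on the worker's PATH.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # run daily at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    # Step 1: trigger the raw ingestion (placeholder command).
    load_raw = BashOperator(
        task_id="load_raw",
        bash_command="python /opt/pipelines/load_orders.py",
    )

    # Step 2: build and test dbt models once the raw layer is refreshed.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="cd /opt/dbt_project && dbt build --target prod",
    )

    load_raw >> dbt_build
```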
Posted 3 weeks ago
7.0 - 12.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Lead Data Engineer - What You Will Do:
As a PR3 Lead Data Engineer, you will be instrumental in driving our data strategy, ensuring data quality, and leading the technical execution of a small, impactful team. Your responsibilities will include:

Team Leadership:
- Establish the strategic vision for the evolution of our data products and our technology solutions, then provide technical leadership and guidance for a small team of Data Engineers in executing the roadmap.
- Champion and enforce best practices for data quality, governance, and architecture within your team's work.
- Embody a product mindset over the team's data.
- Oversee the team's use of Agile methodologies (e.g., Scrum, Kanban), ensuring smooth and predictable delivery, and overtly focusing on continuous improvement.

Data Expertise & Domain Knowledge:
- Actively seek out, propose, and implement cutting-edge approaches to data transfer, transformation, analytics, and data warehousing to drive innovation.
- Design and implement scalable, robust, and high-quality ETL processes to support growing business demand for information, delivering data as a reliable service that directly influences decision making.
- Develop a profound understanding and "feel" for the business meaning, lineage, and context of each data field within our domain.

Communication & Stakeholder Partnership:
- Collaborate with other engineering teams and business partners, proactively managing dependencies and holding them accountable for their contributions to ensure successful project delivery.
- Actively engage with data consumers to achieve a deep understanding of their specific data usage, pain points, and current gaps, then plan initiatives to implement improvements collaboratively.
- Clearly articulate project goals, technical strategies, progress, challenges, and business value to both technical and non-technical audiences. Produce clear, concise, and comprehensive documentation.

Your Qualifications:
At Vista, we value the experience and potential that individual team members add to our culture. Please don't hesitate to apply even if you don't meet the exact qualifications; we look forward to learning more about you!
- Bachelor's or Master's degree in computer science, data engineering, or a related field.
- 10+ years of professional experience, with at least 6 years of hands-on Data Engineering, specifically in e-commerce or direct-to-consumer, and 4 years of team leadership.
- Demonstrated experience in leading a team of data engineers, providing technical guidance, and coordinating project execution.
- Stakeholder management experience and excellent communication skills.
- Strong knowledge of SQL and data warehousing concepts is a must.
- Strong knowledge of data modeling concepts and hands-on experience designing complex multi-dimension data models.
- Strong hands-on experience in designing and managing scalable ETL pipelines in cloud environments with large-volume datasets (both structured and unstructured data).
- Proficiency with cloud services in AWS (preferred), including S3, EMR, RDS, Step Functions, Fargate, Glue, etc.
- Critical hands-on experience with cloud-based data platforms (Snowflake strongly preferred).
- Data visualization experience with reporting and data tools (preferably Looker with LookML skills).
- Coding mastery in at least one modern programming language: Python (strongly preferred), Java, Golang, PySpark, etc.
- Strong knowledge of production standards such as versioning, CI/CD, data quality, documentation, automation, etc.
- Problem solving and multi-tasking ability in a fast-paced, globally distributed environment.

Nice To Have:
- Experience with API development on enterprise platforms, with GraphQL APIs being a clear plus.
- Hands-on experience designing DBT data pipelines.
- Knowledge of finance, accounting, supply chain, logistics, operations, or procurement data is a plus.
- Experience managing work in Jira and writing documentation in Confluence.
- Proficiency in AWS account management, including IAM, infrastructure, and monitoring for health, security, and cost optimization.
- Experience with Gen AI/ML tools for enhancing data pipelines or automating analysis.

Why You'll Love Working Here:
There is a lot to love about working at Vista. We are an award-winning Remote-First company. We're an inclusive community. We're growing (which means you can too). And to help orient us all in the same direction, we have our Vista Behaviors, which exemplify the behavioral attributes that make us a culturally strong and high-performing team.

Our Team: Enterprise Business Solutions
Vista's Enterprise Business Solutions (EBS) domain is working to make our company one of the most data-driven organizations to support Finance, Supply Chain, and HR functions. The cross-functional team includes product owners, analysts, technologists, data engineers and more, all focused on providing Vista with cutting-edge tools and data we can use to deliver jaw-dropping customer value. EBS team members are empowered to learn new skills, communicate openly, and be active problem-solvers.

Join our EBS Domain as a Lead Data Engineer! This Lead level within the organization will be responsible for the work of a small team of data engineers, focusing not only on implementations but also on operations and support. The Lead Data Engineer will implement best practices, data standards, and reporting tools. The role will oversee and manage the work of other data engineers as well as being an individual contributor. This role has a lot of opportunity to impact general ETL development and implementation of new solutions. We will look to the Lead Data Engineer to modernize data technology solutions in EBS, including the opportunity to work on modern warehousing, finance, and HR datasets and integration technologies. This role will require an in-depth understanding of cloud data integration tools and cloud data warehousing, with a strong and pronounced ability to lead and execute initiatives to tangible results.
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
You should have a minimum of 6 years of experience in the technical field and possess the following skills: Python, Spark SQL, PySpark, Apache Airflow, DBT, Snowflake, CI/CD, Git, GitHub, and AWS. Your role will involve understanding the existing code base in AWS services and SQL, and converting it to a tech stack primarily using Airflow, Iceberg, Python, and SQL. Your responsibilities will include designing and building data models to support business requirements, developing and maintaining data ingestion and processing systems, implementing data storage solutions, ensuring data consistency and accuracy through validation and cleansing techniques, and collaborating with cross-functional teams to address data-related issues. Proficiency in Python, experience with big data Spark, orchestration experience with Airflow, and AWS knowledge are essential for this role. You should also have experience in security and governance practices such as role-based access control (RBAC) and data lineage tools, as well as knowledge of database management systems like MySQL. Strong problem-solving and analytical skills, along with excellent communication and collaboration abilities, are key attributes for this position. At NucleusTeq, we foster a positive and supportive culture that encourages our associates to perform at their best every day. We value and celebrate individual uniqueness, offering flexibility for making daily choices that contribute to overall well-being. Our well-being programs and continuous efforts to enhance our culture aim to create an environment where our people can thrive, lead healthy lives, and excel in their roles.,
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
You are familiar with AWS and Azure Cloud. You have extensive knowledge of Snowflake, with SnowPro Core certification being a must-have. In at least one project, you have utilized DBT to deploy models in production. Furthermore, you have experience in configuring and deploying Airflow and integrating various operators in Airflow, especially for DBT and Snowflake. Your capabilities also include designing build and release pipelines, and a solid understanding of the Azure DevOps ecosystem. Proficiency in Python, particularly PySpark, allows you to write metadata-driven programs. You are well-versed in Data Vault (Raw, Business) and concepts such as Point In Time and Semantic Layer. In ambiguous situations, you demonstrate resilience and possess the ability to clearly articulate problems in a business-friendly manner. Documenting processes, managing artifacts, and evolving them over time are practices you believe in and adhere to diligently.

Required skills: Data Vault, DBT, Python, PySpark, Snowflake, AWS and Azure Cloud, Airflow, Azure DevOps.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Are you ready to power the world's connections? If you don't think you meet all of the criteria below but are still interested in the job, please apply. Nobody checks every box; we're looking for candidates that are particularly strong in a few areas, and have some interest and capabilities in others.

We are seeking a dedicated CX Business Ops Analyst with a proven track record in Professional Services or Customer Success data analysis, ideally with experience in enterprise software environments. As a member of the Customer Experience (CX) Operations team, you will support the CX (Professional Services, Customer Success, Support, Education) organization's growth and optimization, while sitting in the broader Revenue Operations team. In this role, you will have the opportunity to interface with everyone in the CX team as you build our internal analytics to help guide CX team members to deliver maximum value to Kong customers. You will support both strategic and tactical initiatives and will function as the primary CX Ops point of contact for all data, reporting, and analytics questions on a day-to-day basis.

What you'll be doing:
- Work across SQL data warehouses (Snowflake and BigQuery), Tableau, Google Sheets, and Google Slides depending on the nature of the analysis and reporting. We use ETL and reverse ETL technologies to update our CRM and data warehouses and organize data transformations with DBT.
- Create, maintain, analyze, and present reports, metrics, and dashboards across all levels and roles of the CX team.
- Build and maintain slide decks for key CX cadences (QBRs, All-Hands, Board Decks, etc.).
- Build and maintain the CX data dictionary and reporting suite for all roles and levels of CX.
- Analyze, model, and forecast Professional Services KPIs for internal and external resources.
- Own the user adoption and documentation of CX analytics.
- Manage CX team inquiries and ad-hoc requests across data, reporting, and analytics.
- Help improve customer data points and run projects as necessary to ensure data integrity.

What you'll bring:
- A passion for data, user experience, and automation.
- Strong customer service attitude, and ability to work independently and in a fast-paced environment.
- A team player who works well in a collaborative environment.
- Proficiency with SQL for data analysis and modeling. Experience with DBT is a plus.
- Advanced Google Sheets and Google Slides skills; Tableau reporting expertise; basic Salesforce reporting skills.
- Reliability and attention to detail.
- Excellent written and communication skills. Ability to concisely articulate complex issues and solutions to different audiences.
- 3-5 years of relevant business experience.

About Kong:
Kong is THE cloud-native API platform with the fastest, most adopted API gateway in the world (over 300m downloads!). As the innovation leader of cloud API technologies, Kong is on a mission to enable companies around the world to become "API-first" and securely accelerate AI adoption. Kong helps organizations globally, from startups to Fortune 500 enterprises, unleash developer productivity, build securely and accelerate to market. 83% of web traffic today is API calls! APIs are the connective tissue of the cloud and the underlying technology that allows software to talk and interact with one another. Therefore, we believe that APIs act as the nervous system of the cloud.
Our audacious mission is to build the nervous system that will safely and reliably connect all of humankind! For more information about Kong, please visit konghq.com or follow @thekonginc on Twitter.,
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Looking for a Data Tester with DBT (data build tool) experience for a core conversion project.
- Offshore ETL Tester with knowledge of Power BI and DBT Labs.
- Very good in SQL concepts and query writing.
- Experience of writing simple procedures using T-SQL.
- Hands-on experience in ETL testing.
- Testing of data loads/intakes/extracts and incremental loads.
- Good manual testing / UI testing.
- Ability to design test cases from requirements, and test planning.
- Very good communication and coordination skills.
- Knowledge of the banking domain / agency management is an added advantage.
Posted 3 weeks ago
9.0 - 14.0 years
30 - 37 Lacs
Hyderabad
Hybrid
- SQL & Database Management: deep knowledge of relational databases (SQL / PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses like Snowflake.
- ETL/ELT Tools: experience with SnapLogic, StreamSets, or DBT for building and maintaining data pipelines; extensive experience with ETL tools and data pipelines.
- Data Modeling & Optimization: strong understanding of data modeling, OLAP systems, query optimization, and performance tuning.
- Cloud & Security: familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE).
- Data Warehousing: experience managing large datasets, data marts, and optimizing databases for performance.
- Agile & CI/CD: knowledge of Agile methodologies and CI/CD automation tools.

Important: the candidate should have a strong data engineering background with hands-on experience in handling large volumes of data, data pipelines, and cloud-based data systems, and this should be reflected in the profile.
Posted 3 weeks ago
5.0 - 8.0 years
8 - 15 Lacs
Hyderabad, Telangana, India
On-site
Role & responsibilities:
- Design, implement, and manage cloud-based solutions on AWS and Snowflake.
- Work with stakeholders to gather requirements and design solutions that meet their needs.
- Develop and execute test plans for new solutions.
- Oversee and design the information architecture for the data warehouse, including all information structures such as the staging area, data warehouse, data marts, and operational data stores.
- Ability to optimize Snowflake configurations and data pipelines to improve performance, scalability, and overall efficiency.
- Deep understanding of Data Warehousing, Enterprise Architectures, Dimensional Modeling, Star and Snowflake schema design, Reference DW Architectures, ETL architecture, ETL (Extract, Transform, Load), Data Analysis, Data Conversion, Transformation, Database Design, Data Warehouse Optimization, Data Mart Development, and Enterprise Data Warehouse Maintenance and Support.
- Significant experience working as a Data Architect with depth in data integration and data architecture for Enterprise Data Warehouse implementations (conceptual, logical, physical and dimensional models).
- Maintain documentation: develop and maintain detailed documentation for data solutions and processes.
- Provide training: offer training and leadership to share expertise and best practices with the team.
- Collaborate with the team and provide leadership to the data engineering team, ensuring that data solutions are developed according to best practices.

Preferred candidate profile:
- Snowflake, DBT, and Data Architecture design experience in Data Warehouse.
- Good to have Informatica or any ETL knowledge or hands-on experience.
- Good to have a Databricks understanding.
- 5+ years of IT experience, with 3+ years of Data Architecture experience in Data Warehouse and 4+ years in Snowflake.
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
punjab
On-site
About Us
We are a global climate technologies company engineered for sustainability. We create sustainable and efficient residential, commercial and industrial spaces through HVACR technologies. We protect temperature-sensitive goods throughout the cold chain. And we bring comfort to people globally. Best-in-class engineering, design and manufacturing combined with category-leading brands in compression, controls, software and monitoring solutions result in next-generation climate technology that is built for the needs of the world ahead. Whether you are a professional looking for a career change, an undergraduate student exploring your first opportunity, or a recent graduate with an advanced degree, we have opportunities that will allow you to innovate, be challenged and make an impact. Join our team and start your journey today!

Job Description / Responsibilities:
We are looking for Data Engineers. The candidate must have a minimum of ten (10) years of experience in a Data Engineer role, including the following tools/technologies:
- Experience with relational (SQL) databases.
- Experience with data warehouses like Oracle, SQL, and Snowflake.
- Technical expertise in data modeling, data mining and segmentation techniques.
- Experience with building new and troubleshooting existing data pipelines using tools like Pentaho Data Integration (PDI), ADF (Azure Data Factory), Snowpipe, Fivetran, and DBT.
- Experience with batch and real-time data ingestion and processing frameworks.
- Experience with languages like Python, Java, etc.
- Knowledge of additional cloud-based analytics solutions, along with Kafka, Spark and Scala, is a plus.

Data Engineering:
- Develops code and solutions that transfer/transform data across various systems.
- Maintains deep technical knowledge of various tools in the data warehouse, data hub, and analytical tools.
- Ensures data is transformed and stored in efficient methods for retrieval and use.
- Maintains data systems to ensure optimal performance.
- Develops a deep understanding of underlying business systems involved with analytical systems.
- Follows standard software development lifecycle, code control, code standards and process standards.
- Maintains and develops technical knowledge by self-training on current toolsets and computing environments, participates in educational opportunities, maintains professional networks, and participates in professional organizations related to their tech skills.

Systems Analysis:
- Works with key stakeholders to understand business needs and capture functional and technical requirements.
- Offers ideas that simplify the design and complexity of solutions delivered.
- Effectively communicates any expectations required of stakeholders or other resources during solution delivery.
- Develops and executes test plans to ensure successful rollout of solutions, including accuracy and quality of data.

Service Management:
- Effectively communicates to leaders and stakeholders any obstacles that occur during solution delivery.
- Defines and manages promised delivery dates.
- Proactively researches, analyzes, and predicts operational issues, informing leadership where appropriate.
- Offers viable options to solve unexpected/unknown issues that occur during solution development and delivery.

Education / Job-Related Technical Skills:
- Bachelor's Degree in Computer Science/Information Technology or equivalent.
- Ability to effectively communicate with others at all levels of the company, both verbally and in writing.
- Demonstrates a courteous, tactful, and professional approach with employees and others.
- Ability to work in a large, global corporate structure.

Our Commitment to Our People
Across the globe, we are united by a singular purpose: sustainability is no small ambition. That's why everything we do is geared toward a sustainable future, for our generation and all those to come. Through groundbreaking innovations, HVACR technology and cold chain solutions, we are reducing carbon emissions and improving energy efficiency in spaces of all sizes, from residential to commercial to industrial. Our employees are our greatest strength. We believe that our culture of passion, openness, and collaboration empowers us to work toward the same goal: to make the world a better place. We invest in the end-to-end development of our people, beginning at onboarding and through senior leadership, so they can thrive personally and professionally. Flexible and competitive benefits plans offer the right options to meet your individual/family needs. We provide employees with flexible time off plans, including paid parental leave (maternal and paternal), vacation and holiday leave. Together, we have the opportunity and the power to continue to revolutionize the technology behind air conditioning, heating and refrigeration, and cultivate a better future. Learn more about us and how you can join our team!

Our Commitment to Diversity, Equity & Inclusion
At Copeland, we believe having a diverse, equitable and inclusive environment is critical to our success. We are committed to creating a culture where every employee feels welcomed, heard, respected, and valued for their experiences, ideas, perspectives and expertise. Ultimately, our diverse and inclusive culture is the key to driving industry-leading innovation, better serving our customers and making a positive impact in the communities where we live.

Equal Opportunity Employer
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
The position at Iris Software in Noida, UP, India is looking for a candidate with 3-4 years of working experience. The ideal candidate must have a strong background in Python Django and should also possess expertise in SQL, Snowflake, and DBT. As a part of the Iris Software team, you will be working on complex, mission-critical applications using cutting-edge technologies such as Python, Django, SQL, Snowflake, and more. The company's vision is to be the most trusted technology partner for its clients and create an environment where professionals can realize their full potential. Iris Software values its employees and offers a supportive work culture where individuals are encouraged to grow both professionally and personally. The company's Employee Value Proposition focuses on enabling employees to excel in their careers, be challenged by inspiring work, and be part of a culture that recognizes and nurtures talent. Joining Iris Software comes with a range of benefits aimed at supporting the financial, health, and well-being needs of its employees. From competitive salaries to comprehensive health insurance and flexible work arrangements, the company is committed to providing a rewarding work environment that fosters personal and professional growth. If you are looking to be a part of one of India's Top 25 Best Workplaces in the IT industry and want to work with a rapidly growing IT services company, Iris Software could be the place for you to do your best work and thrive in an award-winning work culture.,
Posted 3 weeks ago
5.0 - 10.0 years
10 - 12 Lacs
Navi Mumbai
Work from Office
Hello candidates, we are hiring!

Job Position: Data Engineer
Experience: 5+ years
Location: Navi Mumbai (Juinagar)
Work Mode: WFO

Job Description
We are looking for an experienced and results-driven Senior Data Engineer to join our Data Engineering team. In this role, you will design, develop, and maintain robust data pipelines and infrastructure that enable efficient data flow across our systems. As a senior contributor, you will also help define best practices, mentor junior team members, and contribute to the long-term vision of our data platform. You will work closely with cross-functional teams to deliver reliable, scalable, and high-performance data systems that support critical business intelligence and analytics initiatives.

Responsibilities
- Design, build, and maintain scalable ETL/ELT pipelines to support analytics, the data warehouse, and business operations (an illustrative PySpark sketch follows this listing).
- Collaborate with cross-functional teams to gather requirements and deliver high-quality data solutions.
- Develop and manage data models, data lakes, and data warehouse solutions in cloud environments (e.g., AWS, Azure, GCP).
- Monitor and optimize the performance of data pipelines and storage systems.
- Ensure data quality, integrity, and security across all platforms.
- Optimize and tune SQL queries and ETL jobs for performance and scalability.
- Collaborate with business analysts, data scientists, and stakeholders to understand requirements and deliver data solutions.
- Contribute to architectural decisions and development standards across the data engineering team.
- Participate in code reviews and provide guidance to junior developers.
- Leverage tools such as Airflow, Spark, Kafka, dbt, or Snowflake to build modern data infrastructure.
- Ensure data accuracy, completeness, and integrity across systems.
- Implement best practices in data governance, security, and compliance (e.g., GDPR, HIPAA).
- Mentor junior developers and participate in peer code reviews.
- Create and maintain detailed technical documentation.

Required Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field; a Master's degree is a plus.
- 5+ years of experience in data warehousing, ETL development, and data modeling.
- Strong hands-on experience with one or more databases: Snowflake, Redshift, SQL Server, Oracle, Postgres, Teradata, BigQuery.
- Proficiency in SQL and scripting languages (e.g., Python, Shell).
- Deep knowledge of data modeling techniques and ETL frameworks.
- Excellent communication, analytical thinking, and troubleshooting skills.

Preferred Qualifications
- Experience with modern data stack tools like dbt, Fivetran, Stitch, Looker, Tableau, or Power BI.
- Knowledge of data lakes, lakehouses, and real-time data streaming (e.g., Kafka).
- Agile/Scrum project experience and version control using Git.

NOTE: Candidates can share their resume at shruti.a@talentsketchers.com
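As a minimal sketch of the batch ETL/ELT work described above, the PySpark job below reads raw files, applies a simple cleanup transformation, and writes a partitioned table. The bucket paths, column names, and output format are assumptions for illustration only.

```python
# Illustrative PySpark batch ETL job (hypothetical paths and columns).
# Requires: pip install pyspark
from pyspark.sql import SparkSession, functions as F

def main() -> None:
    spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

    # Extract: read raw CSV files landed by an upstream ingestion process.
    raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/orders/")

    # Transform: type the columns, derive a date partition, drop bad rows.
    cleaned = (
        raw.withColumn("order_amount", F.col("order_amount").cast("double"))
           .withColumn("order_date", F.to_date("order_ts"))
           .dropna(subset=["order_id", "order_date"])
           .dropDuplicates(["order_id"])
    )

    # Load: write a partitioned Parquet table for downstream warehouse/BI use.
    (cleaned.write
            .mode("overwrite")
            .partitionBy("order_date")
            .parquet("s3a://example-bucket/curated/orders/"))

    spark.stop()

if __name__ == "__main__":
    main()
```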
Posted 3 weeks ago
5.0 - 9.0 years
10 - 20 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
Sr. Data Analytics Engineer: power mission-critical decisions with governed insights

Ajmera Infotech builds planet-scale software for NYSE-listed clients, driving decisions that can't afford to fail. Our 120-engineer team specializes in highly regulated domains (HIPAA, FDA, SOC 2) and delivers production-grade systems that turn data into strategic advantage.

Why You'll Love It
- End-to-end impact: build full-stack analytics from lakehouse pipelines to real-time dashboards.
- Fail-safe engineering: TDD, CI/CD, DAX optimization, Unity Catalog, cluster tuning.
- Modern stack: Databricks, PySpark, Delta Lake, Power BI, Airflow.
- Mentorship culture: lead code reviews, share best practices, grow as a domain expert.
- Mission-critical context: help enterprises migrate legacy analytics into cloud-native, governed platforms.
- Compliance-first mindset: work in HIPAA-aligned environments where precision matters.

Key Responsibilities
- Build scalable pipelines using SQL, PySpark, and Delta Live Tables on Databricks (an illustrative Delta Lake sketch follows this listing).
- Orchestrate workflows with Databricks Workflows or Airflow; implement SLA-backed retries and alerting.
- Design dimensional models (star/snowflake) with Unity Catalog and Great Expectations validation.
- Deliver robust Power BI solutions: dashboards, semantic layers, paginated reports, DAX.
- Migrate legacy SSRS reports to Power BI with zero loss of logic or governance.
- Optimize compute and cost through cache tuning, partitioning, and capacity monitoring.
- Document everything, from pipeline logic to RLS rules, in Git-controlled formats.
- Collaborate cross-functionally to convert product analytics needs into resilient BI assets.
- Champion mentorship by reviewing notebooks and dashboards and sharing platform standards.

Must-Have Skills
- 5+ years in analytics engineering, with 3+ in production Databricks/Spark contexts.
- Advanced SQL (incl. windowing), expert PySpark, Delta Lake, Unity Catalog.
- Power BI mastery: DAX optimization, security rules, paginated reports.
- SSRS-to-Power BI migration experience (RDL logic replication).
- Strong Git and CI/CD familiarity, and cloud platform know-how (Azure/AWS).
- Communication skills to bridge technical and business audiences.

Nice-to-Have Skills
- Databricks Data Engineer Associate certification.
- Streaming pipeline experience (Kafka, Structured Streaming).
- dbt, Great Expectations, or similar data quality frameworks.
- BI diversity: experience with Tableau, Looker, or similar platforms.
- Cost governance familiarity (Power BI Premium capacity, Databricks chargeback).

Benefits & Call-to-Action
Ajmera offers competitive compensation, flexible schedules, and a deeply technical culture where engineers lead the narrative. If you're driven by reliable, audit-ready data products and want to own systems from raw ingestion to KPI dashboards, apply now and engineer insights that matter.
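To give a concrete feel for the Databricks/Delta Lake pipeline work listed above, here is a minimal PySpark sketch that upserts an incoming batch into a Delta table. The catalog, table, and key names are assumptions; in practice the same logic would often live in a Delta Live Tables pipeline or a Databricks Workflows job rather than a standalone script.

```python
# Illustrative Delta Lake upsert on Databricks (hypothetical catalog/table/key names).
# Assumes a Databricks runtime (or pyspark + delta-spark) where DeltaTable is available.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events_merge").getOrCreate()

# Incoming batch, e.g. newly landed files in cloud storage.
updates = spark.read.parquet("s3a://example-bucket/landing/events/")

target = DeltaTable.forName(spark, "main.analytics.events")

# MERGE keeps the table idempotent: re-running the same batch does not duplicate rows.
(target.alias("t")
       .merge(updates.alias("u"), "t.event_id = u.event_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```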
Posted 3 weeks ago
5.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Remote
Position Responsibilities:
- As part of the Data Warehouse team, implement technology improvements to the Enterprise Data Warehouse environment. In this role, you will be a key contributor to the implementation of Snowflake with DBT, as part of our migration from Oracle to cloud-based data warehousing.
- Collaborate closely with cross-functional teams to design, develop, and optimize ELT processes within a cloud-based Data Warehouse environment.
- Develop and maintain Fivetran data pipelines to ensure smooth data extraction and loading from various source systems into Snowflake.
- Implement and enhance ETL programs using Informatica PowerCenter against the Data Warehouse and Adobe Campaign (Neolane) databases.
- Contribute to technical architectural planning, digital data modeling, process flow documentation, and the design and development of innovative digital business solutions.
- Create technical designs and mapping specifications.
- Work with both technical staff and business constituents to translate digital business requirements into technical solutions.
- Estimate workload and participate in an Agile project team approach.
- Proven individual contributor and team player with strong communication skills.
- Ability to lead, manage, and validate workload for up to 2 offshore developers.
- Provide on-call support of the Data Warehouse nightly processing.
- Be an active participant in both technology and business initiatives.

Position Requirements & Qualifications:
- At least 8 years of experience supporting Data Warehouse and data-related environments.
- Conduct efficient data integration with other third-party tools and Snowflake.
- Hands-on experience with Snowflake development.
- Familiarity with cloud-based Data Warehousing solutions, particularly Snowflake.
- Advanced experience required in Informatica PowerCenter (5+ years).
- Ability to code in Python and JavaScript.
- Knowledge of data governance practices and data security considerations in a cloud environment.
- Experience in working with web services using Informatica for external vendor data integration.
- Experience in working with a number of XML data sources and API calls.
- Solid experience in performance tuning ETL jobs and database queries.
- Advanced Oracle and Snowflake database skills, including packages, procedures, indexing, and query tuning (5+ years).
- Solid understanding of Data Warehouse design theory, including dimensional data modeling.
- Working experience with cloud computing architecture.
- Experience in working with Azure DevOps, Jira, TFS (Team Foundation Server), or other similar Agile project management tools.
- Ability to thrive in change by having a fast, flexible, cooperative work style and the ability to reprioritize at a moment's notice.
- Bachelor's degree required.

Notice Period: 0-15 days.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Location: Pune (Hybrid)
Experience: 5+ years

Key Responsibilities:
- Data Pipeline Architecture: build and optimize large-scale data ingestion pipelines from multiple sources.
- Scalability & Performance: ensure low-latency, high-throughput data processing for real-time and batch workloads.
- Cloud Infrastructure: design and implement cost-effective, scalable data storage solutions.
- Automation & Monitoring: implement observability tools for pipeline health, error handling, and performance tracking.
- Security & Compliance: ensure data encryption, access control, and regulatory compliance in the data platform.

Ideal Candidate Profile:
- Strong experience in Snowflake, dbt, and AWS for large-scale data processing.
- Expertise in Python, Airflow, and Spark for orchestrating pipelines.
- Deep understanding of data architecture principles for real-time and batch workloads.
- Hands-on experience with Kafka, Kinesis, or similar streaming technologies (an illustrative streaming sketch follows this listing).
- Ability to work on cloud cost optimizations and infrastructure-as-code (Terraform, CloudFormation).
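As a minimal sketch of the streaming ingestion mentioned above, the snippet below reads a Kafka topic with Spark Structured Streaming and appends parsed events to cloud storage. The broker address, topic, schema, and paths are assumptions for illustration, not details from the posting.

```python
# Illustrative Kafka to Spark Structured Streaming ingestion (hypothetical topic/schema/paths).
# Requires pyspark with the spark-sql-kafka connector package available on the cluster.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("clickstream_ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker-1:9092")
            .option("subscribe", "clickstream-events")
            .option("startingOffsets", "latest")
            .load())

# Kafka delivers bytes; parse the JSON value into typed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
             .select(F.from_json("json", event_schema).alias("e"))
             .select("e.*"))

query = (events.writeStream
               .format("parquet")
               .option("path", "s3a://example-bucket/streaming/clickstream/")
               .option("checkpointLocation", "s3a://example-bucket/checkpoints/clickstream/")
               .outputMode("append")
               .start())

query.awaitTermination()
```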
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
delhi
On-site
As a Snowflake DBT Lead at Pyramid Consulting, you will be responsible for overseeing Snowflake data transformation and validation processes in Delhi, India. Your role will include ensuring efficient data handling, maintaining data quality, and collaborating closely with cross-functional teams. To excel in this role, you should have strong expertise in Snowflake, DBT, and SQL. Your experience in data transformation, modeling, and validation will be crucial for success. Proficiency in ETL processes and data warehousing is essential to meet the job requirements. Your excellent problem-solving and communication skills will enable you to effectively address challenges and work seamlessly with team members. As a candidate for this position, you should hold a Bachelor's degree in Computer Science or a related field. Your ability to lead and collaborate within a team environment will be key to delivering high-quality solutions and driving impactful results for our clients.,
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
As a Lead Data Engineer with 8-12 years of experience, you will be responsible for handling a variety of tasks for one of our clients in Hyderabad. This is a full-time position with an immediate start date. Your proficiency in Python, Spark, SQL, Snowflake, Airflow, AWS, and DBT will be essential for this role. In this role, you will be expected to work on a range of data engineering tasks using the specified skill set. Your expertise in these technologies will be crucial in developing efficient data pipelines, ensuring data quality, and optimizing data workflows. Furthermore, you will collaborate with cross-functional teams to understand data requirements, design and implement data solutions, and provide technical guidance on best practices. Your ability to communicate effectively and work well in a team setting will be key to your success in this role. If you are interested in this opportunity and possess the required skill set, please share your profile with us at srujanat@teizosoft.com. We look forward to potentially having you join our team in Hyderabad.,
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
Genpact is a global professional services and solutions firm focused on delivering outcomes that shape the future. With over 125,000 employees in more than 30 countries, we are driven by curiosity, agility, and the desire to create lasting value for our clients. Our purpose is the relentless pursuit of a world that works better for people, serving and transforming leading enterprises, including Fortune Global 500 companies, through deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Lead Consultant - Databricks Developer (AWS). As a Databricks Developer in this role, you will be responsible for solving cutting-edge real-world problems to meet both functional and non-functional requirements.

Responsibilities:
- Stay updated on new and emerging technologies and explore their potential applications for service offerings and products.
- Collaborate with architects and lead engineers to design solutions that meet functional and non-functional requirements.
- Demonstrate knowledge of relevant industry trends and standards.
- Showcase strong analytical and technical problem-solving skills.
- Possess excellent coding skills, particularly in Python or Scala, with a preference for Python.

Qualifications

Minimum qualifications:
- Bachelor's Degree in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
- Stay informed about new technologies and their potential applications.
- Collaborate with architects and lead engineers to develop solutions.
- Demonstrate knowledge of industry trends and standards.
- Exhibit strong analytical and technical problem-solving skills.
- Proficient in Python or Scala coding.
- Experience in the Data Engineering domain.
- Completed at least 2 end-to-end projects in Databricks.

Additional qualifications:
- Familiarity with Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration.
- Understanding of the Databricks Lakehouse concept and its implementation in enterprise environments.
- Ability to create complex data pipelines.
- Strong knowledge of data structures and algorithms.
- Proficiency in SQL and Spark SQL.
- Experience in performance optimization to enhance efficiency and reduce costs.
- Worked on both batch and streaming data pipelines.
- Extensive knowledge of the Spark and Hive data processing frameworks.
- Experience with cloud platforms (Azure, AWS, GCP) and common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
- Skilled in writing unit and integration test cases.
- Excellent communication skills and experience working in teams of 5 or more.
- Positive attitude towards learning new skills and upskilling.
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience in CI/CD to build pipelines for Databricks jobs.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

This is a full-time position based in Gurugram, India. The job was posted on August 5, 2024, with an unposting date of October 4, 2024.
Posted 3 weeks ago
10.0 - 20.0 years
20 - 30 Lacs
Pune
Remote
Role & responsibilities:
- Minimum 10+ years of experience developing, designing, and implementing data engineering solutions.
- Collaborate with data engineers and architects to design and optimize data models for the Snowflake Data Warehouse.
- Optimize query performance and data storage in Snowflake by utilizing clustering, partitioning, and other optimization techniques.
- Experience working on projects housed within an Amazon Web Services (AWS) cloud environment.
- Experience working on projects housed within Tableau and DBT.
- Work closely with business stakeholders to understand requirements and translate them into technical solutions.
- Excellent presentation and communication skills, both written and verbal, and the ability to problem solve and design in an environment with unclear requirements.
Posted 3 weeks ago