0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Senior Software Developer - Java and React/Angular, SQL, API

Mastercard is a global technology company behind the world's fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe that as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.

As an SSE in Data Platform and Engineering Services, you will have the opportunity to build high-performance data pipelines that load into the Mastercard Data Warehouse. Our Data Warehouse provides analytical capabilities to a number of business users who help different customers answer their business problems through data. You will play a vital role within a rapidly growing organization, while working closely with experienced and driven engineers to solve challenging problems.

Your primary responsibilities include designing, developing, and maintaining software applications using Java and related technologies. In addition to your Java development skills, expertise in React or Angular is beneficial for building modern and dynamic user interfaces for web applications. This role requires strong skills in HTML, CSS, and JavaScript, as well as experience working with libraries and frameworks like Angular or React. Other important skills for an SSE with full-stack development experience include:
Knowledge of software design patterns and best practices
Experience working in a Unix environment
Proficiency in database technologies such as SQL and NoSQL
Experience working with databases such as Oracle and Netezza, with strong SQL knowledge
Experience with RESTful web services and API design
Experience in full-stack Java development, along with proficiency in Angular or React, would be a valuable asset to this team.
Knowledge of Redis will be an added advantage
Experience working with NiFi will be an added advantage
Experience working with APIs will be an added advantage
Experience working in Agile teams
Experience in Data Engineering and in implementing multiple end-to-end DW projects in a Big Data environment
Strong analytical skills for debugging production issues, identifying root causes and implementing mitigation plans
Strong communication skills - both verbal and written - and strong relationship, collaboration and organizational skills
Ability to multi-task across multiple projects, interface with external/internal resources and provide technical leadership to junior team members
High-energy, detail-oriented and proactive, able to function under pressure in an independent environment, with a high degree of initiative and self-motivation to drive results
Ability to quickly learn and implement new technologies, and to perform POCs to explore the best solution for a problem statement
Flexibility to work as a member of matrixed, diverse and geographically distributed project teams

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard's security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
R-252141
Posted 2 months ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Position:
We are seeking a seasoned C++ Architect to lead the design, implementation, and execution of testing strategies - designing robust test frameworks, defining automation strategies, and ensuring end-to-end system reliability across distributed, cloud-native, and embedded Linux systems. This role requires a deep understanding of Linux. The ideal candidate will be responsible for ensuring the quality, performance, security, and compliance of databases and data warehouses.

Role: C++ Architect
Location: Pune, Nagpur, Kochi, Hyderabad
Experience: 12+ yrs
Job Type: Full Time Employment

What You'll Do:
Develop core database features using C++
Design and implement cloud-centric features for C++ code, e.g. ingesting data from a data lake, support for different data types
Write connectors for the database product

Expertise You'll Bring:
Around 12 years of IT experience
Development/bug fixing on a database product based on Linux and C/C++
Good hands-on knowledge of databases like Oracle/Netezza/Postgres/DB2; proficient Linux skills
Experience working in Linux and shell scripting; experience working in Agile
C/C++, SQL, database internals
Redhat certified; Linux internals; kernel programming

Benefits:
Competitive salary and benefits package
Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
Opportunity to work with cutting-edge technologies
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
Annual health check-ups; insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
Accelerate growth, both professionally and personally
Impact the world in powerful, positive ways, using the latest technologies
Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent.
"Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 2 months ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Desired Competencies (Technical/Behavioral Competency)
Must-Have: IBM DataStage / Informatica, Snowflake, SQL, Unix
Good-to-Have: DBT

Responsibility of / Expectations from the Role
1. Analyze and build ETL data assets using IBM DataStage / Informatica and on-prem databases such as Netezza. Propose, design and implement solutions to migrate such data assets to cloud data warehouses such as AWS S3 and Snowflake, and build ELT transformations using DBT and Python.
2. Create and manage data pipelines supporting CI/CD.
3. Work with project and business analyst leads to develop and clarify in-depth technical requirements, including logical and physical data modeling activities.
4. Develop, test, enhance, debug and support data assets/applications for business units or supporting functions using the IBM InfoSphere DataStage ETL tool suite, with both ETL and ELT approaches. These application program solutions may involve diverse development platforms, software, hardware, technologies and tools.
5. Participate in the design, development and implementation of complex applications, often using IBM InfoSphere Information Server (IIS) products such as DataStage and QualityStage on a Linux Grid environment, along with Snowflake and Control-M/scheduling tools.
Posted 2 months ago
35.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior Analyst - Data Operations
Job Level: P2

FCI (First Citizens India) is a global delivery center for FCI, the financial partner of the innovation economy and parent of First Citizens India. FCI develops products and provides services in finance, banking operations, and technology, employs more than 800 employees, and will be rapidly expanding.
Job location - Bangalore

Be Part Of a Bank Like No Other. When you work with the world's most innovative companies, you know you're making a difference. Our clients are the game changers, leaders and investors who fuel the global innovation economy. They're the businesses behind the next medical breakthroughs. And the visionaries whose new technologies could transform the way people live and work. For 35 years, FCI Financial Group and its subsidiaries have helped innovative companies and their investors move ambitious ideas forward, fast. FCI Financial Group's businesses, including Silicon Valley Bank / First Citizens India, operate at the intersection of innovation and capital, and provide a comprehensive range of financial services including commercial banking, investment solutions, research and insights, funds management, and private banking and wealth advisory. FCI helps high-growth companies in the technology, life science and healthcare, private equity and venture capital, and premium wine industries navigate at every stage. In addition, the company focuses on encouraging positive relationships with firms within the private equity and venture capital community worldwide, many of which are also the firm's clients and may invest in the firm's corporate clients.

The Sr. Data Operations Analyst is responsible for SQL development, metadata management, and Python development in data quality and data governance related functions. The candidate should have 3-7 years of overall industry experience, primarily involving the analytical and technical skills necessary to innovate, build, and maintain well-managed data solutions, and 4-6 years of experience in the banking and financial data domain, especially in data analysis capabilities to solve data quality and business problems under the Data Governance Framework.

Key Roles & Responsibilities: Key data deliverables at Silicon Valley Bank / First Citizens India.

Skills and Qualifications:
Data Governance background: Exposure to Data Governance, client data remediation, Data Quality, DQ issue management, DQ control and exceptions remediation in the BFSI domain; experience in use cases such as CECL, CCAR, FR Y-14 schedules, 10-K/10-Q, party data, LFI KDEs, FCO KDEs, etc. is a plus. Demonstrated knowledge of data governance, data quality management concepts and data quality tools (e.g. IBM IA, Informatica DQ, Collibra); understanding of Agile development methodologies, data lineage, data profiling and data mapping; experience in data quality dimension analysis.
Up-skilling: Capacity to upskill, constantly learn, and think independently.
Python Developer: 3-7 years of strong experience in Python; a Python developer with a strong foundation in core Python concepts, object-oriented programming, debugging, and version control, along with knowledge of data structures, algorithms, and potentially web frameworks or data science libraries. Participate in the entire software development lifecycle, building, testing and delivering high-quality solutions. Collaborate with cross-functional teams to identify and solve complex problems.
SQL Developer: Strong proficiency with SQL and its variation among popular databases; exposure to Netezza is an added advantage. Analyze existing SQL queries for performance improvements. Create complex functions and stored procedures. Analyze queries, develop security protocols, and resolve problems. Design database architecture and create dashboards. Write complex queries for applications and business intelligence reporting. Exposure to database design, structure and development. Suggest new queries and increase the efficiency of existing queries. Capable of troubleshooting common database issues. Skilled at optimizing large, complicated SQL statements. Develop procedures and scripts for data migration. Knowledge of best practices when dealing with relational databases. Capable of configuring popular database engines and orchestrating clusters as necessary.
Data Visualization & Dashboard Development: Exposure to data visualization tools preferred.

Responsibilities:
Python: Understand requirements; write strong, clean and reusable code that can be easily maintained and scaled.
SQL: Help write and optimize in-application SQL statements.
Collaborate with cross-functional teams (e.g., developers, business owners, project managers, business analysts) to gather requirements and implement solutions.
Excellent communication skills, with exceptional writing and verbal skills to interact with stakeholders; communicate technical information clearly and effectively.
Prepare documentation and specifications.
Handle common database procedures such as upgrade, backup, recovery, migration, etc.
Knowledge of SQL query writing and database security best practices.
Profile server resource usage, and optimize and tweak as necessary.
Collaborate with other team members and stakeholders.
Ensure performance, security, and availability of databases.
Analyze performance issues and implement solutions.

Required Education and Experience
5+ years of experience as a Python and SQL Developer with a strong portfolio of projects.
In-depth understanding of Python software development stacks, ecosystems, and frameworks.
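For illustration of the data-quality dimension checks this role describes (completeness, uniqueness, validity), here is a minimal sketch in Python. The extract file name, key column, and validation rule are hypothetical placeholders, not part of the original listing.

```python
# Illustrative data-quality checks: null rates, duplicate keys, simple validity rule.
# File name and column names are hypothetical.
import pandas as pd

KEY_COLUMNS = ["party_id"]                                  # hypothetical business key
CRITICAL_COLUMNS = ["party_id", "country_code", "account_open_date"]

def run_dq_checks(df: pd.DataFrame) -> dict:
    """Return a small dictionary of data-quality metrics for reporting."""
    return {
        # Completeness: share of nulls per critical column.
        "null_rate": df[CRITICAL_COLUMNS].isna().mean().round(4).to_dict(),
        # Uniqueness: duplicate rows on the business key.
        "duplicate_keys": int(df.duplicated(subset=KEY_COLUMNS).sum()),
        # Validity: country codes expected as two uppercase letters.
        "bad_country_codes": int((~df["country_code"].astype(str)
                                  .str.fullmatch(r"[A-Z]{2}")).sum()),
    }

if __name__ == "__main__":
    frame = pd.read_csv("party_extract.csv")                # hypothetical extract
    print(run_dq_checks(frame))
```

A real DQ framework would route these metrics into issue-management tooling (e.g. Collibra or Informatica DQ, as named in the listing); the sketch only shows the measurement step.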
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction
At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your Role And Responsibilities
India Software Labs is looking for enthusiastic and talented software developers to join us. Our product portfolio includes several competitive offerings in the market such as WatsonX, Cloud Pak for Data / Integration, DB2, OpenPages, SPSS Analytics, Information Governance, and Netezza, to name a few. Software Developers at IBM are the backbone of our strategic initiatives to design, code, test, and provide industry-leading solutions that make the world run today - planes and trains take off on time, bank transactions complete in the blink of an eye and the world remains safe because of the work our software developers do. Whether you are working on projects internally or for a client, software development is critical to the success of IBM and our clients worldwide. At IBM, you will use the latest software development tools, techniques and approaches and work with leading minds in the industry to build solutions you can be proud of.
Design, develop, test, operate and maintain database features in our products, services and tools to provide a secure environment for the product to be used by customers in the cloud.
Evaluate new technologies and processes that enhance our service capabilities.
Document and share your experience; mentor others.

Required Technical And Professional Expertise
Strong software programming skills using languages like Java/C/C++/Go/Scala
Strong problem determination and resolution skills
Exposure to best practices in design, development, and testing of software
Knowledge of any SQL database (Db2, PostgreSQL, MySQL, Oracle, SQL Server, etc.)
Understanding of virtualization and containerization technologies
Knowledge of Docker and Kubernetes frameworks
Familiarity with the usage of cloud services (IBM Cloud, Amazon Web Services, Microsoft Azure)
Knowledge of Linux/UNIX operating systems

Preferred Technical And Professional Experience
Any shell scripting languages: Bash/Perl/Python/Ruby
Posted 2 months ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Vodafone Idea Limited is an Aditya Birla Group and Vodafone Group partnership. It is India's leading telecom service provider. The Company provides pan-India Voice and Data services across 2G, 3G and 4G platforms. With a large spectrum portfolio to support the growing demand for data and voice, the company is committed to delivering delightful customer experiences and contributing towards creating a truly 'Digital India' by enabling millions of citizens to connect and build a better tomorrow. The Company is developing infrastructure to introduce newer and smarter technologies, making both retail and enterprise customers future-ready with innovative offerings, conveniently accessible through an ecosystem of digital channels as well as extensive on-ground presence. The Company is listed on the National Stock Exchange (NSE) and Bombay Stock Exchange (BSE) in India.

We're proud to be an equal opportunity employer. At VIL, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected and empowered to reach their potential and contribute their best. VIL's goal is to build and maintain a workforce that is diverse in experience and background but uniform in reflecting our Values of Passion, Boldness, Trust, Speed and Digital. Consequently, our recruiting efforts are directed towards attracting and retaining the best and brightest talent. Our endeavour is to be the first choice for prospective employees. VIL ensures equal employment opportunity without discrimination or harassment based on race, colour, religion, creed, age, sex, sex stereotype, gender, gender identity or expression, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy, veteran or military service status, genetic information, or any other characteristic protected by law. VIL is an equal opportunity employer committed to diversifying its workforce.

Job Purpose: Study and draw inferences from customer VOC across various data sources such as CRM interactions, feedback at retail, TNPS dumps, and call listening. Perform regular sentiment analysis to identify the key pain areas and work towards fixing the issues.

Key Accountabilities:
Analysing customer VoC - extensive study of customer VOC from all touchpoints; identifying customer requirements from VoC
Studying TNPS data - analysing customer feedback from TNPS dumps; sentiment analysis of the feedback; making action plans based on TNPS findings
Fixing audit observations - understanding audit observations from process auditors; prescribing process corrections to process owners wherever required; gathering information from various stakeholders
Data Analytics - methodical analysis of customer VoC; analysing digital user trends and behaviours; predicting M+1 trends based on various data studies; developing analytical models for effective study of customer interactions
Digitization - finding opportunity areas for digitization of existing processes/journeys based on customer VoC and top interactions at various touchpoints; identifying improvement opportunity areas in digital apps based on feedback from various touchpoints

Core Competencies:
Awareness of CLC processes
Telecom business process awareness
Knowledge of telecom systems and applications
Excellent analytical skills
Problem-solving skills
Experience with BI tools such as SAS, MS Access, Netezza, Cognos
MS Office
Generative AI

Key Performance Indicators:
Process change recommendations based on VOC & TNPS study
Improvement in TNPS
Reduction in complaint trends
Process audit indicator on process accuracy and compliance
Increase in digital penetration

Vodafone Idea Limited (formerly Idea Cellular Limited), an Aditya Birla Group & Vodafone partnership
Posted 2 months ago
3.0 years
6 - 27 Lacs
Delhi, India
On-site
About The Opportunity
A fast-growing player in the Data & Analytics consulting sector, we build cloud-native data platforms and real-time reporting solutions for enterprises in retail, BFSI, and healthcare. Leveraging Google Cloud's advanced analytics stack, we turn high-volume data into actionable insights that accelerate digital transformation and revenue growth.

Role & Responsibilities
Design, develop, and optimize BigQuery data warehouses for petabyte-scale analytics.
Build ingestion pipelines using Dataflow, Pub/Sub, and Cloud Storage to ensure reliable, low-latency data availability.
Implement ELT/ETL workflows in Python and SQL, applying best practices for partitioning, clustering, and cost control.
Create and orchestrate DAGs in Cloud Composer/Airflow to automate data processing and quality checks.
Collaborate with analysts and business stakeholders to model datasets, define SLAs, and deliver high-impact dashboards.
Harden production environments with CI/CD, Terraform, monitoring, and automated testing for zero-defect releases.

Skills & Qualifications
Must-Have
3+ years building data pipelines on Google Cloud Platform.
Expert hands-on experience with BigQuery optimisation and SQL performance tuning.
Proficiency in Python scripting for data engineering tasks.
Deep understanding of Dataflow or Apache Beam streaming and batch paradigms.
Solid grasp of data warehousing principles, partitioning, and metadata management.
Version control, containerisation, and CI/CD exposure (Git, Docker, Cloud Build).
Preferred
Terraform/IaC for infrastructure provisioning.
Experience migrating on-prem warehouses (Teradata, Netezza) to BigQuery.
Knowledge of data governance, DLP, and security best practices in GCP.

Benefits & Culture Highlights
Industry-leading GCP certifications paid and supported.
Product-grade engineering culture with peer mentorship and hackathons.
On-site, collaboration-rich workplace designed for learning and innovation.

Skills: apache beam, ci/cd, sql, python, git, cloud storage, dataflow, docker, bigquery, airflow, cloud build, terraform, data warehousing, gcp data engineer (bigquery)
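As a rough illustration of the Cloud Composer/Airflow orchestration and data-quality gating described in the listing above, here is a minimal DAG sketch. It assumes Airflow 2.x with the Google provider package installed; the dataset and table names (raw.orders, analytics.daily_orders) are hypothetical placeholders.

```python
# Minimal, illustrative Cloud Composer/Airflow DAG: an ELT step in BigQuery SQL
# followed by a simple row-count quality gate. Names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryCheckOperator,
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_orders_elt",               # hypothetical pipeline name
    schedule_interval="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["bigquery", "elt"],
) as dag:
    # ELT step: build a partitioned, clustered reporting table directly in SQL.
    build_report = BigQueryInsertJobOperator(
        task_id="build_daily_orders",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_orders
                    PARTITION BY order_date
                    CLUSTER BY region AS
                    SELECT DATE(order_ts) AS order_date,
                           region,
                           SUM(amount)   AS revenue
                    FROM raw.orders
                    GROUP BY order_date, region
                """,
                "useLegacySql": False,
            }
        },
    )

    # Data-quality gate: fail the DAG run if the table came out empty.
    row_count_check = BigQueryCheckOperator(
        task_id="row_count_check",
        sql="SELECT COUNT(*) > 0 FROM analytics.daily_orders",
        use_legacy_sql=False,
    )

    build_report >> row_count_check
```

In practice the query would live in a versioned SQL file and the DAG would be deployed to the Composer environment's bucket via CI/CD, in line with the zero-defect release practices the listing mentions.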
Posted 2 months ago
3.0 years
6 - 27 Lacs
Chennai, Tamil Nadu, India
On-site
About The Opportunity
A fast-growing player in the Data & Analytics consulting sector, we build cloud-native data platforms and real-time reporting solutions for enterprises in retail, BFSI, and healthcare. Leveraging Google Cloud's advanced analytics stack, we turn high-volume data into actionable insights that accelerate digital transformation and revenue growth.

Role & Responsibilities
Design, develop, and optimize BigQuery data warehouses for petabyte-scale analytics.
Build ingestion pipelines using Dataflow, Pub/Sub, and Cloud Storage to ensure reliable, low-latency data availability.
Implement ELT/ETL workflows in Python and SQL, applying best practices for partitioning, clustering, and cost control.
Create and orchestrate DAGs in Cloud Composer/Airflow to automate data processing and quality checks.
Collaborate with analysts and business stakeholders to model datasets, define SLAs, and deliver high-impact dashboards.
Harden production environments with CI/CD, Terraform, monitoring, and automated testing for zero-defect releases.

Skills & Qualifications
Must-Have
3+ years building data pipelines on Google Cloud Platform.
Expert hands-on experience with BigQuery optimisation and SQL performance tuning.
Proficiency in Python scripting for data engineering tasks.
Deep understanding of Dataflow or Apache Beam streaming and batch paradigms.
Solid grasp of data warehousing principles, partitioning, and metadata management.
Version control, containerisation, and CI/CD exposure (Git, Docker, Cloud Build).
Preferred
Terraform/IaC for infrastructure provisioning.
Experience migrating on-prem warehouses (Teradata, Netezza) to BigQuery.
Knowledge of data governance, DLP, and security best practices in GCP.

Benefits & Culture Highlights
Industry-leading GCP certifications paid and supported.
Product-grade engineering culture with peer mentorship and hackathons.
On-site, collaboration-rich workplace designed for learning and innovation.

Skills: apache beam, ci/cd, sql, python, git, cloud storage, dataflow, docker, bigquery, airflow, cloud build, terraform, data warehousing, gcp data engineer (bigquery)
Posted 2 months ago
3.0 years
6 - 27 Lacs
Hyderabad, Telangana, India
On-site
About The Opportunity
A fast-growing player in the Data & Analytics consulting sector, we build cloud-native data platforms and real-time reporting solutions for enterprises in retail, BFSI, and healthcare. Leveraging Google Cloud's advanced analytics stack, we turn high-volume data into actionable insights that accelerate digital transformation and revenue growth.

Role & Responsibilities
Design, develop, and optimize BigQuery data warehouses for petabyte-scale analytics.
Build ingestion pipelines using Dataflow, Pub/Sub, and Cloud Storage to ensure reliable, low-latency data availability.
Implement ELT/ETL workflows in Python and SQL, applying best practices for partitioning, clustering, and cost control.
Create and orchestrate DAGs in Cloud Composer/Airflow to automate data processing and quality checks.
Collaborate with analysts and business stakeholders to model datasets, define SLAs, and deliver high-impact dashboards.
Harden production environments with CI/CD, Terraform, monitoring, and automated testing for zero-defect releases.

Skills & Qualifications
Must-Have
3+ years building data pipelines on Google Cloud Platform.
Expert hands-on experience with BigQuery optimisation and SQL performance tuning.
Proficiency in Python scripting for data engineering tasks.
Deep understanding of Dataflow or Apache Beam streaming and batch paradigms.
Solid grasp of data warehousing principles, partitioning, and metadata management.
Version control, containerisation, and CI/CD exposure (Git, Docker, Cloud Build).
Preferred
Terraform/IaC for infrastructure provisioning.
Experience migrating on-prem warehouses (Teradata, Netezza) to BigQuery.
Knowledge of data governance, DLP, and security best practices in GCP.

Benefits & Culture Highlights
Industry-leading GCP certifications paid and supported.
Product-grade engineering culture with peer mentorship and hackathons.
On-site, collaboration-rich workplace designed for learning and innovation.

Skills: apache beam, ci/cd, sql, python, git, cloud storage, dataflow, docker, bigquery, airflow, cloud build, terraform, data warehousing, gcp data engineer (bigquery)
Posted 2 months ago
8.0 - 10.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Urgent Opening for Solution Architect - Data Warehouse - Bangalore
Posted On 04th Jul 2019 12:25 PM
Location: Bangalore
Role / Position: Solution Architect - Data Warehouse
Experience (required): 8 plus years

Description
8-10 years' experience in consulting or IT supporting Enterprise Data Warehouse & Business Intelligence environments, including experience with data warehouse architecture & design, ETL design/development, and analytics. Responsible for defining the data strategy and for ensuring that programs and projects align to that strategy.
Provides thought leadership in the following areas:
- Data Access, Data Integration, Data Visualization, Data Modeling, Data Quality and Metadata management
- Analytics, Data Discovery, use of statistical methods, Database Design and Implementation
Expertise in database appliances, RDBMS, Teradata, Netezza.
Hands-on experience with data architecting, data mining, large-scale data modeling, and business requirements gathering/analysis.
Experience in ETL and data migration tools.
Direct experience in implementing enterprise data management processes, procedures, and decision support.
Strong understanding of relational data structures, theories, principles, and practices.
Strong familiarity with metadata management and associated processes.
Hands-on knowledge of enterprise repository tools, data modeling tools, data mapping tools, and data profiling tools.
Demonstrated expertise with repository creation, and data and information system life cycle methodologies.
Experience with business requirements analysis, entity relationship planning, database design, reporting structures, and so on.
Ability to manage data and metadata migration.
Experience with data processing flowcharting techniques.
Hands-on experience (5 years) in Big Data technologies - Hadoop, MapReduce, MongoDB - and integration with legacy environments is preferred.
Experience with Spark using Scala or Python is a big plus.
Experience in cloud technologies (primarily AWS, Azure) and integration with existing on-premise data warehouse technologies; good knowledge of S3, Redshift, Blob Storage, Presto DB, etc.
Attitude to learn and adopt emerging technologies.
Send resumes to girish.expertiz@gmail.com
Posted 2 months ago
3.0 years
6 - 27 Lacs
Pune, Maharashtra, India
On-site
About The Opportunity
A fast-growing player in the Data & Analytics consulting sector, we build cloud-native data platforms and real-time reporting solutions for enterprises in retail, BFSI, and healthcare. Leveraging Google Cloud's advanced analytics stack, we turn high-volume data into actionable insights that accelerate digital transformation and revenue growth.

Role & Responsibilities
Design, develop, and optimize BigQuery data warehouses for petabyte-scale analytics.
Build ingestion pipelines using Dataflow, Pub/Sub, and Cloud Storage to ensure reliable, low-latency data availability.
Implement ELT/ETL workflows in Python and SQL, applying best practices for partitioning, clustering, and cost control.
Create and orchestrate DAGs in Cloud Composer/Airflow to automate data processing and quality checks.
Collaborate with analysts and business stakeholders to model datasets, define SLAs, and deliver high-impact dashboards.
Harden production environments with CI/CD, Terraform, monitoring, and automated testing for zero-defect releases.

Skills & Qualifications
Must-Have
3+ years building data pipelines on Google Cloud Platform.
Expert hands-on experience with BigQuery optimisation and SQL performance tuning.
Proficiency in Python scripting for data engineering tasks.
Deep understanding of Dataflow or Apache Beam streaming and batch paradigms.
Solid grasp of data warehousing principles, partitioning, and metadata management.
Version control, containerisation, and CI/CD exposure (Git, Docker, Cloud Build).
Preferred
Terraform/IaC for infrastructure provisioning.
Experience migrating on-prem warehouses (Teradata, Netezza) to BigQuery.
Knowledge of data governance, DLP, and security best practices in GCP.

Benefits & Culture Highlights
Industry-leading GCP certifications paid and supported.
Product-grade engineering culture with peer mentorship and hackathons.
On-site, collaboration-rich workplace designed for learning and innovation.

Skills: apache beam, ci/cd, sql, python, git, cloud storage, dataflow, docker, bigquery, airflow, cloud build, terraform, data warehousing, gcp data engineer (bigquery)
Posted 2 months ago
5.0 - 10.0 years
5 - 8 Lacs
Hyderabad
Work from Office
Key Responsibilities:
Design, develop, and maintain QlikView applications and dashboards.
Collaborate with business stakeholders to gather requirements and translate them into technical specifications.
Perform data analysis and create data models to support business intelligence initiatives.
Optimize QlikView applications for performance and scalability.
Provide technical support and troubleshooting for QlikView applications.
Ensure data accuracy and integrity in all QlikView applications.
Integrate Snowflake with QlikView to enhance data processing and analytics capabilities.
Stay updated with the latest QlikView features and best practices.
Conduct training sessions for end-users to maximize the utilization of QlikView applications.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
Proven experience of 2-5 years as a QlikView Developer.
Strong knowledge of QlikView architecture, data modeling, and scripting.
Proficiency in SQL and database management.
Knowledge of Snowflake and its integration with QlikView.
Excellent analytical and problem-solving skills.
Ability to work independently and as part of a team.
Strong communication and interpersonal skills.
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
Develops ETL solutions using Informatica PowerCenter.
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Chennai
Work from Office
Develop and manage data integration workflows using IBM InfoSphere DataStage. You will design, implement, and optimize ETL processes to ensure efficient data processing. Expertise in DataStage, ETL tools, and database management is required for this role.
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
Develop and manage ETL processes using Informatica, ensuring smooth data extraction, transformation, and loading across multiple systems. Optimize data workflows to ensure high-quality data management.
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Hyderabad
Work from Office
Design and develop ETL processes using IBM DataStage. Focus on data integration, transformation, and loading, ensuring efficient data pipelines.
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Chennai
Work from Office
Designs and develops ETL pipelines using IBM InfoSphere DataStage. Handles data integration and transformation tasks.
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Design and implement business intelligence solutions using MicroStrategy. Build dashboards, reports, and analytics tools that provide actionable insights to help organizations make informed business decisions.
Posted 2 months ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
Design and implement data integration solutions using IBM InfoSphere DataStage. Develop ETL jobs, write PL/SQL scripts, and use Unix Shell Scripting for text processing to manage large datasets efficiently.
Posted 2 months ago
5.0 years
0 Lacs
Andhra Pradesh, India
On-site
JD

Key Responsibilities
Lead the end-to-end migration of legacy data warehouses (e.g., Teradata, Oracle, SQL Server, Netezza, Redshift) to Snowflake.
Assess current data architecture and define migration strategy, roadmap, and timelines.
Develop ELT/ETL pipelines using tools such as dbt, Apache Airflow, Matillion, Talend, Informatica, etc.
Optimize Snowflake configurations, including clustering, caching, and resource management for performance and cost efficiency.
Implement security best practices, including role-based access, masking, and data encryption.
Collaborate with data engineering, analytics, and business teams to ensure accurate and efficient data transfer.
Create and maintain technical documentation, including migration plans, test scripts, and rollback procedures.
Support validation, testing, and go-live activities.

Required Skills & Experience
5+ years in data engineering or data platform roles, with at least 2+ years in Snowflake migration projects.
Hands-on experience in migrating large datasets from legacy data warehouses to Snowflake.
Proficient in SQL, Python, and Snowflake scripting (SnowSQL, stored procedures, UDFs).
Experience with data migration tools and frameworks (e.g., AWS SCT, Azure Data Factory, Fivetran, etc.).
Strong knowledge of cloud platforms (AWS, Azure, or GCP).
Familiarity with DevOps practices, CI/CD for data pipelines, and version control (Git).
Excellent problem-solving and communication skills.

Preferred Qualifications
Snowflake certification(s): SnowPro Core or Advanced Architect.
Experience with real-time data ingestion (e.g., Kafka, Kinesis, Pub/Sub).
Background in data governance, data quality, and compliance (GDPR, HIPAA).
Prior experience in Agile/Scrum delivery environments.
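As a rough sketch of one load-and-validate step in the kind of warehouse-to-Snowflake migration described above, the snippet below uses the Snowflake Python connector to stage a legacy extract and bulk-load it. The account, credentials, table, and file names are hypothetical placeholders.

```python
# Illustrative migration load step with the Snowflake Python connector:
# stage a legacy-warehouse extract, COPY it into a target table, then validate.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",    # hypothetical account locator
    user="MIGRATION_SVC",
    password="***",
    warehouse="MIGRATION_WH",
    database="EDW",
    schema="STAGE",
)
cur = conn.cursor()
try:
    # Stage the exported file into the table's internal stage.
    cur.execute("PUT file:///tmp/orders_extract.csv.gz @%ORDERS AUTO_COMPRESS=FALSE")
    # Bulk-load it; abort on the first bad record so issues surface early.
    cur.execute(
        "COPY INTO ORDERS FROM @%ORDERS "
        "FILE_FORMAT=(TYPE=CSV FIELD_OPTIONALLY_ENCLOSED_BY='\"' SKIP_HEADER=1) "
        "ON_ERROR='ABORT_STATEMENT'"
    )
    # Basic validation: compare the loaded row count against the source extract.
    cur.execute("SELECT COUNT(*) FROM ORDERS")
    loaded_rows = cur.fetchone()[0]
    print(f"Loaded {loaded_rows} rows into EDW.STAGE.ORDERS")
finally:
    cur.close()
    conn.close()
```

In a real project this step would sit inside an orchestrated pipeline (Airflow, Matillion, etc.) alongside reconciliation checks and the rollback procedures the listing calls for.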
Posted 2 months ago
6.0 - 11.0 years
2 - 6 Lacs
Hyderabad
Work from Office
5+ years of experience in Teradata. Strong Teradata skills; should know BTEQ scripts, FastLoad and MultiLoad (MLoad). Informatica PowerCenter experience. Good at writing SQL queries using joins. Proficient knowledge of the Dataiku tool is essential. Proficient in Python scripting - able to read logs, write and automate scripts, and build server-side tooling.
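For illustration of the "automate scripts and read logs" part of this role, here is a minimal Python sketch that runs a BTEQ batch script and scans its output for Teradata failures. The script and log file names are hypothetical; it assumes the bteq utility is on the PATH of the host where it runs.

```python
# Illustrative automation of a BTEQ batch run with basic log scanning.
# Script and log paths are hypothetical placeholders.
import subprocess
from pathlib import Path

BTEQ_SCRIPT = Path("load_sales.bteq")   # hypothetical BTEQ script
LOG_FILE = Path("load_sales.log")

def run_bteq(script: Path, log: Path) -> None:
    """Run a BTEQ script, capture its output, and fail on Teradata error markers."""
    with script.open("rb") as stdin, log.open("wb") as stdout:
        result = subprocess.run(["bteq"], stdin=stdin, stdout=stdout,
                                stderr=subprocess.STDOUT)
    text = log.read_text(errors="replace")
    # BTEQ reports SQL errors as "*** Failure <code> ..."; treat any occurrence as fatal.
    if result.returncode != 0 or "*** Failure" in text:
        raise RuntimeError(f"BTEQ run failed, see {log}")
    print(f"BTEQ run completed, log written to {log}")

if __name__ == "__main__":
    run_bteq(BTEQ_SCRIPT, LOG_FILE)
```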
Posted 2 months ago
10.0 - 14.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Must Haves
Strong experience with SQL development/NZSQL, including stored procedures
Strong experience with advanced SQL development and SQL optimization
Must have used external tables and NZLOAD for file loading and unloading
Experience with materialized views and CBTs (clustered base tables)
Worked in AWS Redshift development or Netezza for at least 2-3 years
Strong Unix/Linux shell scripting
Ability to interpret data models (3NF or Kimball) and code SQL accordingly
Must have used DevOps tooling - Jenkins, Bitbucket/Git/Sourcetree, automated test scripts using Unix, etc.
Strong analytic and problem-solving skills
Must have implemented an end-to-end BI DWH application

Good to Have
An understanding of Control-M, IBM CDC, EC2, etc.
An understanding of AWS S3 and AWS DMS services
An understanding of reporting tools like Tableau or Cognos
Insurance domain knowledge would be an added advantage
Experience in Agile ways of working
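For illustration of the external-table file unload/load the listing mentions, here is a minimal sketch driven from Python over ODBC. The DSN, tables, and file paths are hypothetical, and NZLOAD itself is a separate command-line utility; the transient external-table route shown here is one common alternative on Netezza.

```python
# Illustrative Netezza unload/load via external tables, executed over ODBC.
# DSN, credentials, tables, and file paths are hypothetical placeholders.
import pyodbc

# Assumes a configured Netezza ODBC DSN named "NZPROD".
conn = pyodbc.connect("DSN=NZPROD;UID=etl_user;PWD=***", autocommit=True)
cur = conn.cursor()

# Unload a table to a delimited flat file through an external table.
cur.execute("""
    CREATE EXTERNAL TABLE ext_sales_out
    SAME AS sales
    USING (DATAOBJECT ('/export/sales_20240101.csv')
           DELIMITER '|' REMOTESOURCE 'ODBC')
""")
cur.execute("INSERT INTO ext_sales_out SELECT * FROM sales")

# Reload the same file into a staging table through another external table.
cur.execute("""
    CREATE EXTERNAL TABLE ext_sales_in
    SAME AS sales_stage
    USING (DATAOBJECT ('/export/sales_20240101.csv')
           DELIMITER '|' REMOTESOURCE 'ODBC')
""")
cur.execute("INSERT INTO sales_stage SELECT * FROM ext_sales_in")

cur.close()
conn.close()
```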
Posted 2 months ago
5.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Where Data Does More. Join the Snowflake team.

The Technical Instructor for the Snowflake Customer Education and Training Team will be responsible for creating and delivering compelling education content and training sets that make complex concepts come alive in instructor-led classroom venues. The senior instructor will be seen as a subject matter expert and leader in transferring knowledge of Snowflake to customers, partners and internal teams, and in accelerating their technical on-boarding journey. This role will also be responsible for cross-training efforts and program management, and will help strategically ramp multiple resources within our external stakeholders. This role is a unique opportunity to contribute in a meaningful way to high-value and high-impact delivery at a very exciting time for the company. Snowflake is an innovative, high-growth, customer-focused company in a large and growing market. If you are an energetic, self-managed professional with experience teaching data courses to customers and possess excellent presentation and communication skills, we'd love to hear from you.

AS A TECHNICAL INSTRUCTOR AT SNOWFLAKE, YOU WILL:
Teach a breadth of technical courses to onboard customers and partners to Snowflake, the data warehouse built for the Cloud
Cross-train a breadth of technical courses to qualified individuals and resources
Cover course concepts that may include foundational and advanced topics in the discipline, including Snowflake data warehousing concepts, novel SQL capabilities, data consumption and connectivity interfaces, data integration and ingestion capabilities, database security features, database performance topics, Cloud ecosystem topics and more
Apply database and data warehousing industry/domain/technology expertise and experience during training sessions to help customers and partners ease their organizations into the Snowflake data warehouse from prior database environments
Deliver content and cross-train on delivery best practices using a variety of presentation formats, including engaging lectures, live demonstration, and technical labs
Work with customers and partners that are investing in the train-the-trainer program to certify their selected trainers, ensuring they are well prepared and qualified to deliver the course at their organization
Bring a strong eye for design, making complex training concepts come alive in a blended educational delivery model
Work with the education content developers to help prioritize, create, integrate, and publish training materials and hands-on exercises for Snowflake end users; drive continuous improvement of training performance
Work with additional Snowflake subject-matter experts in creating new education materials and updates to keep pace with Snowflake product updates

OUR IDEAL TECHNICAL INSTRUCTOR WILL HAVE:
Strong data warehouse and data-serving platform background
Recent experience using SQL, potentially in complex workloads
5-10 years of experience in technical content training development and delivery
Strong desire and ability to teach and train
Prior experience with other databases (e.g. Oracle, IBM Netezza, Teradata, ...)
Excellent written and verbal communication skills
An innovative and assertive approach, with the ability to pick up new technologies
Presence: enthusiastic and high energy, but also poised, confident and extremely professional
A track record of delivering results in a dynamic start-up environment
Experience working cross-functionally, ideally with solution architects, technical writers, and support
A strong sense of ownership and high attention to detail
A degree in a field such as Computer Science or Management Information Systems
Comfort with travel up to 75% of the time

BONUS POINTS FOR EXPERIENCE WITH THE FOLLOWING:
Creating and delivering training programs to mass audiences
Other databases (e.g. Teradata, Netezza, Oracle, Redshift, ...)
Non-relational platforms and tools for large-scale data processing (e.g. Hadoop, HBase, ...)
Common BI and data exploration tools (e.g. MicroStrategy, Business Objects, Tableau, ...)
Large-scale infrastructure-as-a-service platforms (e.g. Amazon AWS, Microsoft Azure, ...)
ETL pipeline tools
Using AWS and Microsoft services
Train-the-Trainer programs
Proven success at enterprise software startups

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
Posted 2 months ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a Delivery Manager with DWH experience.
Location: Chennai, Noida & Bangalore

Required Skills:
12+ years of experience in managing delivery of Data Warehouse projects (development & modernization/migration).
Strong delivery background with experience in managing large, complex Data Warehouse engagements.
Good to have experience with Snowflake, Matillion, DBT, Netezza/DataStage and Oracle.
Healthcare Payer industry experience.
Extensive experience in program/project management and in Iterative, Waterfall and Agile methodologies.
Ability to track and manage complex program budgets.
Experience in managing the delivery of complex programs to meet the needs and required timelines set for the defined programs; communicate program review results to various stakeholders.
Experience in building the team, providing guidance and education as needed to ensure the success of priority programs, and promoting cross-training within the department.
Experience in developing and managing integrated program plans that incorporate both technical and business deliverables.
Verify that critical decision gates are well defined, communicated and monitored for executive approval throughout the program; verify that work supports the corporate strategic direction.
Review resulting vendor proposals and estimates to ensure they satisfy both our functional requirements and technology strategies.
Project management methodologies, processes, and tools; knowledge of the project development life cycle.
Establish and maintain strong working relationships with various stakeholders, including team members, IT resources, resources in other areas of the business, and upper management.
Strong business acumen and political savvy.
Ability to collaborate while dealing with complex situations.
Ability to think creatively and to drive innovation.
Ability to motivate, lead and inspire a diverse group to a common goal/solution with multiple stakeholders.
Ability to convert business strategy into action-oriented objectives and measurable results.
Strong negotiating, influencing, and consensus-building skills.
Ability to mentor, coach and provide guidance to others.

Responsibilities:
Responsible for the end-to-end delivery of Application Development and Support services for the client.
Coordinate with the Enterprise Program Management Office to execute programs following defined standards and governance structure, ensuring alignment to the approved project development life cycle (PDLC).
Interface regularly with key senior business leaders to enable a smooth transition from strategy development to program identification and execution.
Facilitate meetings with task groups or functional areas as required for EPMO-supported initiatives and/or to resolve issues.
Proactively engage other members of the organization with specific subject knowledge to resolve issues or provide assistance.
Lead post-implementation reviews of major initiatives to capture lessons learned and drive continuous improvement.
Develop accurate and timely summary reports for executive management that provide consolidated, clear, and concise assessments of strategic initiative implementation status.
Collaborate with business owners to develop divisional business plans that support the overall strategic direction.
Support the budget allocation process through ongoing financial tracking reports.
Develop and maintain service plans considering customer requirements.
Track and monitor adherence to SLAs/KPIs.
Identify opportunities to improve the service delivery process.
Address service delivery issues/escalations/complaints; act as the first point of escalation for customer escalations.
Oversee shift management for various tracks.
Responsible for publishing production support reports and metrics.

Best Regards,
Sanjay Kumar
Posted 2 months ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities
• ARCHITECTURE AND DESIGN FOR DATA ENGINEERING AND MACHINE LEARNING PROJECTS: Establishing architecture and target design for data engineering and machine learning projects.
• REQUIREMENT ANALYSIS, PLANNING, EFFORT AND RESOURCE NEEDS ESTIMATION: Current inventory analysis, reviewing and formalizing requirements, project planning and execution plan.
• ADVISORY SERVICES AND BEST PRACTICES: Troubleshooting, performance tuning, cost optimization, operational runbooks and mentoring.
• LARGE MIGRATIONS: Assist customers with large migrations to Databricks from Hadoop ecosystems, data warehouses (Teradata, DataStage, Netezza, Ab Initio), ETL engines (Informatica), SAS, SQL, DW, and cloud-based data platforms like Redshift, Snowflake, EMR, etc.
• DESIGN, BUILD AND OPTIMIZE DATA PIPELINES: The Databricks implementation will be best in class, with flexibility for future iterations.
• PRODUCTION READINESS: Assisting with production readiness for customers, including exception handling, production cutover, capture analysis, alert scheduling and monitoring.
• MACHINE LEARNING (ML) - MODEL REVIEW, TUNING, ML OPERATIONS AND OPTIMIZATION: Build and review ML models, ML best practices, model lifecycle, ML frameworks and deployment of models in production.

Must Have:
▪ Pre-sales experience is a must.
▪ Hands-on experience with a distributed computing framework like Databricks or the Spark ecosystem (Spark Core, PySpark, Spark Streaming, SparkSQL).
▪ Willingness to work with product teams to best optimize product features/functions.
▪ Experience with batch workloads and real-time streaming with high-volume data frequency.
▪ Performance optimization of Spark workloads.
▪ Environment setup, user management, authentication and cluster management on Databricks.
▪ Professional curiosity and the ability to enable yourself in new technologies and tasks.
▪ Good understanding of SQL and a good grasp of relational and analytical database management theory and practice.

Key Skills:
• Python, SQL and PySpark
• Big Data ecosystem (Hadoop, Hive, Sqoop, HDFS, HBase)
• Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks
• Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse)
• AWS (Lambda, AWS Glue, S3, Redshift)
• Data modelling, ETL methodology
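As a rough illustration of the batch data-pipeline work this listing describes, here is a minimal PySpark sketch (read, transform, write). The mount paths and column names are hypothetical placeholders; on Databricks the SparkSession is normally provided by the runtime rather than created explicitly.

```python
# Minimal PySpark batch-pipeline sketch: read raw extracts, transform, write curated output.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch_pipeline").getOrCreate()

# Read raw CSV extracts landed by an upstream ingestion job.
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/raw/orders/"))

# Typical transformation step: filter bad records, derive columns, aggregate.
curated = (raw
           .filter(F.col("amount") > 0)
           .withColumn("order_date", F.to_date("order_ts"))
           .groupBy("order_date", "region")
           .agg(F.sum("amount").alias("revenue"),
                F.countDistinct("order_id").alias("orders")))

# Write a partitioned columnar output; repartitioning keeps file sizes manageable.
(curated
 .repartition("order_date")
 .write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("/mnt/curated/daily_orders/"))

spark.stop()
```

Performance tuning of the kind the listing asks about usually starts from exactly these knobs: partition counts, join/aggregation strategy, and file layout of the written output.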
Posted 2 months ago