5 - 8 years
4 - 8 Lacs
Maharashtra
Work from Office
Design, develop, and optimize data integration workflows using Apache NiFi. Work on mission-critical projects, ensuring high availability, reliability, and performance of data pipelines.

Responsibilities:
- Integrate NiFi with cloud platforms (e.g., AWS, Azure, GCP) for scalable data processing and storage.
- Develop custom NiFi processors and extensions using Java.
- Implement real-time data streaming solutions using Apache Kafka.
- Work with MongoDB for NoSQL data storage and retrieval.
- Use GoldenGate for real-time data replication and integration.
- Troubleshoot and resolve complex issues related to NiFi workflows and data pipelines.
- Collaborate with cross-functional teams to deliver robust, production-ready solutions.
- Follow best practices in coding, testing, and deployment to ensure high-quality deliverables.
- Mentor junior team members and provide technical leadership.

Mandatory Skills and Qualifications:
- 5+ years of hands-on experience in Apache NiFi for data integration and workflow automation.
- Senior-level Java programming knowledge, including experience in developing custom NiFi processors and extensions.
- Strong knowledge of cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., S3, EC2, Lambda, Azure Data Lake, etc.).
- Proficiency in Linux environments, including shell scripting and system administration.
- Experience with Apache Kafka for real-time data streaming and event-driven architectures.
- Hands-on experience with MongoDB for NoSQL data management.
- Familiarity with GoldenGate for real-time data replication and integration.
- Experience in performance tuning and optimization of NiFi workflows.
- Solid understanding of data engineering concepts, including ETL/ELT, data lakes, and data warehouses.
- Ability to work independently and deliver results in a fast-paced, high-pressure environment.
- Excellent problem-solving, debugging, and analytical skills.

Good to Have Skills:
- Experience with containerization tools like Docker and Kubernetes.
- Knowledge of DevOps practices and CI/CD pipelines.
- Familiarity with big data technologies like Hadoop, Spark, or Kafka.
- Understanding of security best practices for data pipelines and cloud environments.

Interview Focus Areas:
- Hands-on NiFi Development: practical assessment of NiFi workflow design and optimization.
- Java Programming: senior-level coding skills, including custom NiFi processor development.
- Cloud Integration: understanding of how NiFi integrates with cloud platforms for data processing and storage.
- Kafka and MongoDB: expertise in real-time data streaming and NoSQL data management.
- GoldenGate: knowledge of real-time data replication and integration.
- Linux Proficiency: ability to work in Linux environments and troubleshoot system-level issues.
- Problem Solving: analytical skills to resolve complex data integration challenges.

Shift Requirements:
- Flexible shift hours, with the shift ending by midday US time.
- Willingness to adapt to dynamic project needs and timelines.
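For a concrete sense of the Kafka-to-MongoDB flow this role describes, here is a minimal Python sketch. Note the listing's custom processors are written in Java; this only illustrates the surrounding streaming pattern. The topic, database, collection, key, and broker addresses are hypothetical, and the kafka-python and pymongo packages are assumed:

```python
# Illustrative sketch of a Kafka -> MongoDB ingestion step.
# Topic, database, collection, and broker names are hypothetical.
import json

from kafka import KafkaConsumer
from pymongo import MongoClient

consumer = KafkaConsumer(
    "orders",                                   # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="orders-ingest",
)

collection = MongoClient("mongodb://localhost:27017")["analytics"]["orders"]

for message in consumer:
    # Upsert on a business key so replays stay idempotent.
    collection.update_one(
        {"order_id": message.value["order_id"]},
        {"$set": message.value},
        upsert=True,
    )
```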
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Uttar Pradesh
Work from Office
About the Role: We are seeking a SnapLogic Tech Lead with 10 years of experience to join our dynamic team. The ideal candidate will have extensive expertise in SnapLogic integration and strong knowledge of SAP, Oracle JDBC, and Oracle JD Edwards systems. Working experience with Workday, database technologies, and cloud-based solutions like Snowflake will be considered a significant advantage. This role requires a highly motivated individual with a passion for designing, developing, and maintaining efficient integration solutions across diverse enterprise applications.

Key Responsibilities:
- Design, develop, and implement scalable and efficient integration solutions using SnapLogic.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Create and maintain integration pipelines for SAP, Oracle JDBC, JD Edwards, and other enterprise applications.
- Troubleshoot and resolve integration issues, ensuring data accuracy and system reliability.
- Develop and optimize database queries using JDBC, SQL, and Snowflake.
- Work on integration workflows involving Workday and other HRMS systems.
- Ensure adherence to best practices in SnapLogic development, including error handling, logging, and performance tuning.
- Provide technical leadership and mentorship to junior team members.
- Prepare and maintain technical documentation, including solution designs, data mappings, and system configurations.
- Stay updated on the latest SnapLogic features, trends, and industry best practices.

Required Skills and Qualifications:
- 5+ years of experience in SnapLogic development and integration.
- Strong knowledge of SAP, Oracle JDBC, and Oracle JD Edwards systems.
- Experience with Workday integration workflows.
- Proficiency in database technologies, including JDBC, SQL, and Snowflake.
- Solid understanding of ETL/ELT processes and data integration best practices.
- Strong problem-solving and analytical skills, with the ability to troubleshoot complex integration issues.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Bachelor's degree in Computer Science, Information Technology, or a related field.

Preferred Skills:
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of REST APIs and web services for application integration.
- Experience in CI/CD pipelines for SnapLogic deployments.
- Certifications in SnapLogic or related integration tools are a plus.
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Hyderabad
Work from Office
Tecovas is looking for an Analytics Engineer to join our growing and dynamic Data Team. This position will play an integral role in democratizing data access and use across all departments at Tecovas. Reporting to the Director of Data, you will help build out the company's data pipelines, Data Warehouse, and other data products, and play a key role in ensuring Tecovas has a best-in-class data practice. This candidate is strongly encouraged to work from our HQ office in Austin, TX, with the ability to work remotely on other days.

What you'll do:
- Develop and maintain data models using dbt, ensuring a single-source-of-truth Data Warehouse
- Coordinate cross-functionally to ensure business logic and metrics are accurately captured and aligned
- Collaborate with Data Science, Analytics, Core Systems, and the rest of the Tech team to support advanced data projects
- Advance data monitoring, security, and compliance efforts to align with modern best practices
- Improve data infrastructure using software engineering best practices: data testing, observability, orchestration
- Improve internal tech documentation and business-facing documentation / data dictionary
- Develop and support Data Science and Advanced Analytics pipelines with creative and unique analytics engineering solutions

Experience we're looking for:
- Bachelor's degree in computer science, engineering, or a related field
- 5+ years of experience as a data engineer, analytics engineer, or similar role
- Expertise with dbt
- Expertise with modern Data Engineering best practices, including CDC, observability, quality testing, and performance and cost optimization
- Strong experience with Python, SQL, Git
- Experience with Fivetran, Stitch, or other ETL/ELT tools
- Familiarity with cloud-based platforms like BigQuery, Airflow, or other tools (GCP preferred, but equivalent experience is welcome)
- Excellent interpersonal and communication skills

What you bring to the table:
- You are highly organized and a self-starter.
- You feel confident working in a fast-paced environment.
- You are able to quickly learn new systems and implement new procedures.
- You can easily collaborate with cross-functional partners.
- You have a positive attitude and are motivated by a challenge.
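The dbt models this role centers on are SQL and Jinja, but an orchestrator typically drives the build-and-test cycle programmatically. A minimal hedged sketch of that driver follows; the `staging+` graph selector is a hypothetical model selection, and dbt-core with a configured profile is assumed:

```python
# Minimal sketch of driving a dbt build-and-test cycle from Python,
# e.g. from an orchestrator task. Selectors are hypothetical.
import subprocess

def run_dbt(*args: str) -> None:
    # check=True raises CalledProcessError on a non-zero exit,
    # so a failed model or test fails the whole task.
    subprocess.run(["dbt", *args], check=True)

if __name__ == "__main__":
    run_dbt("run", "--select", "staging+")   # build staging models and everything downstream
    run_dbt("test", "--select", "staging+")  # enforce data-quality tests on the same graph
```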
Posted 3 months ago
8 - 12 years
25 - 35 Lacs
Hyderabad
Work from Office
Job Summary
HighRadius is looking for a dynamic Java professional to join our Engineering Team. This role's responsibilities include participation in software development activities, writing clean and efficient code for various applications, and running tests to improve system functionality. You will write code that is easily maintainable and highly reliable, demonstrate knowledge of common programming best practices, and mentor junior members of the team in delivering sprint stories and tasks.

Key Responsibilities
- Demonstrate excellent leadership and hands-on technical, design, and architecture skills, and lead the team to arrive at optimal solutions for business challenges.
- Review requirements and specifications, and create technical design documents.
- Estimate tasks and meet milestones and deadlines appropriately.
- Provide technical guidance and support during the development phase of a project and ensure project delivery.
- Follow industry best practices for delivering high-quality software in a timely manner and to specification.
- Identify risks or opportunities associated with current or new technology use. Plan and execute PoCs as necessary.
- Strive for continuous improvement of development processes and standards.
- Good interpersonal communication and organizational skills to contribute as a leading member of global, distributed teams focused on delivering quality, highly performant, and scalable solutions.
- Demonstrated ability to rapidly learn new and emerging technologies and develop a vision of their suitability for technology transformation.
- Effectively communicate with team members, project managers, clients, and other stakeholders as required.

Skill & Experience Needed
- Experience in the Payments domain is a must-have.
- Bachelor's degree required.
- Experience range: 8+ years.
- Technology stack: Core Java, Java 8, collections, exception handling, Hibernate, Spring, SQL.
- Good to have: Ext JS or any UI framework experience, Elasticsearch, cloud architecture (AWS, GCP, Azure).
- Knowledge of design patterns, Jenkins, Git, Grafana, ELT, JUnit.
- Deep understanding and experience of architecting and deploying end-to-end scalable web applications.
- Deep understanding of SCRUM/Agile process, project tracking, monitoring, and risk management.

What you get
- Competitive salary
- Fun-filled work culture (https://www.highradius.com/culture/)
- Equal employment opportunities
- Opportunity to build with a pre-IPO Global SaaS Centaur.
Posted 3 months ago
8 - 13 years
22 - 30 Lacs
Chennai, Hyderabad
Work from Office
Tech Lead/Architect
Experience: 8+ years
Location: Hyderabad/Chennai
Email: sneha.parashar@biitservices.com
- Expertise in Snowflake: ELT using Snowflake SQL, implementing complex stored procedures, and building data pipelines
- Any big data tools: Hadoop, Spark
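As an illustrative sketch of the Snowflake stored-procedure work named above, the snippet below deploys and calls a simple Snowflake Scripting procedure via the Python connector. All table, procedure, and credential names are hypothetical; snowflake-connector-python is assumed:

```python
# Hedged sketch: deploy and invoke a simple ELT stored procedure in
# Snowflake Scripting via the Python connector. Names are hypothetical.
import snowflake.connector

DDL = """
CREATE OR REPLACE PROCEDURE load_daily_sales()
RETURNS STRING
LANGUAGE SQL
AS
$$
BEGIN
    INSERT INTO sales_fact
    SELECT order_id, customer_id, amount, order_date
    FROM raw_sales
    WHERE order_date = CURRENT_DATE();
    RETURN 'loaded ' || TO_VARCHAR(SQLROWCOUNT) || ' rows';
END;
$$
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()
cur.execute(DDL)                          # create or replace the procedure
cur.execute("CALL load_daily_sales()")    # run today's load
conn.close()
```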
Posted 3 months ago
6 - 8 years
8 - 10 Lacs
Chennai, Pune, Bengaluru
Work from Office
Overview:

Key Responsibilities:
- Design and develop data pipelines and workflows to extract, transform, and load (ETL/ELT) data from various sources into target systems.
- Automate workflows to ensure efficiency, scalability, and error reduction in data integration processes.
- Optimize and manage data storage solutions, including data lakes and warehouses, for high performance and scalability.
- Ensure data quality by implementing processes to validate completeness, accuracy, and consistency of data.
- Ensure compliance with data governance policies and maintain data privacy and security standards.
- Collaborate with Data Scientists, Analytics Engineers, and stakeholders to understand business requirements and deliver high-quality data solutions.
- Implement monitoring and observability tools for data pipelines to ensure reliability and real-time issue detection.
- Stay abreast of the latest developments and advancements, including new and emerging technologies, best practices, and new tools and software applications, and how they could impact CDM Smith.
- Assist with the development of documentation, standards, best practices, and workflows for data technology hardware/software in use across the business.
- Perform other duties as required.

Skills and Abilities:
- Experience with the Software Development Life Cycle (SDLC) and Agile development methodologies.
- Strong expertise in Microsoft Azure cloud services, including Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
- Proficiency in building and optimizing data systems using modern frameworks like Apache Spark and Databricks.
- Expertise in data modeling and designing scalable ETL/ELT processes.
- Experience with real-time streaming solutions, such as Kafka or Azure IoT Hub.
- Hands-on experience with distributed computing tools, including Apache Spark and Hadoop.
- Familiarity with CI/CD pipelines and DevOps practices for data solutions.
- Knowledge of monitoring tools and techniques for ensuring pipeline observability and reliability.
- Excellent problem-solving and critical thinking skills to identify and address technical challenges effectively.
- Excellent interpersonal and presentation skills to build strategic relationships with colleagues, stakeholders, and partners.
- Strong critical thinking skills to generate innovative solutions and improve business processes.
- Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
- Detail-oriented, with the ability to assist with executing highly complex or specialized projects.

Minimum Qualifications:
- Bachelor's degree.
- 6 years of related experience.
- Equivalent additional directly related experience will be considered in lieu of a degree.

Preferred Qualifications:
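As a minimal sketch of the ETL pattern this role describes, here is an extract-transform-load step using Apache Spark's DataFrame API (the engine behind Azure Databricks). Paths and column names are hypothetical, and a running Spark environment is assumed:

```python
# Illustrative ETL step with PySpark. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw landing-zone data (e.g. from ADLS in a real pipeline).
raw = spark.read.json("/landing/orders/")

# Transform: basic cleansing plus a derived partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write a partitioned, query-friendly table for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/curated/orders/")
```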
Posted 3 months ago
10 - 15 years
30 - 35 Lacs
Chennai, Hyderabad
Hybrid
Who we are:
LearningMate ( LearningMate.com ) is a leading technology multinational with domain expertise in teaching and learning solutions, leveraging digital, cloud, process automation, data, and strong learning design principles. It is part of the Straive group ( Straive.com ), which helps clients operationalize the Data → Insights → Knowledge → AI value chain, extending across industries like Financial Services, Insurance, Healthcare & Life Sciences, Scientific Research, Information Providers, EdTech, and Logistics.

This position is for our partner MGT ( www.mgt.us ), a $440M US-based national technology and advisory solutions leader serving state, local government, education (SLED) and targeted commercial clients across the U.S. Their specialized solutions solve the most critical issues that live at the top of clients' leadership agendas. MGT partners to help clients build resilience, implement systematic change, and strengthen their foundations, now and for the future.

Shift Time Zone: Candidates should be flexible to work until midnight (India time) as needed.

Key Responsibilities:
- Design and develop efficient and scalable Snowflake data pipelines and data models
- Write complex SQL queries to extract, transform, and load data into Snowflake
- Optimize query performance and troubleshoot data quality issues
- Collaborate with data analysts and business users to understand data requirements and translate them into technical solutions
- Stay up to date with the latest Snowflake features and best practices
- Ensure data security and governance by implementing security features like encryption, role-based access control, and data masking (see the sketch after this listing)
- Collaborate with cross-functional teams to meet data analysis needs

Minimum Qualifications, Skills, and Experience:
- Overall 10+ years of data warehousing and 3-5 years of Snowflake experience
- Strong SQL skills, including knowledge of advanced SQL functions and techniques
- Experience with the Snowflake data warehousing platform
- Proficiency in ETL/ELT processes and tools (e.g. dbt, Rivery, Fivetran)
- Understanding of data modeling and data warehousing concepts
- Ability to work independently and as part of a team
- Strong problem-solving and analytical skills

Preferred Qualifications, Skills, and Experience:
- Experience with Python or other scripting languages
- Knowledge of cloud platforms (e.g. AWS, Azure, GCP)
- Experience with data visualization tools (e.g. Tableau, Power BI)
- Snowflake certifications

Why Join Us? You will benefit from a great culture and environment that contributes to your career growth. Benefits include hybrid work mode, a 5-day workweek, and comprehensive insurance coverage, including Mediclaim, healthcare, and term insurance.

How to Apply? Send your resume to: amrita.dostider@learningmate.com
Application Deadline: 6th April 2025

Acknowledgement: We are an equal opportunity employer. We commit to celebrating diversity, equity, and inclusion in the workplace.
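The sketch below illustrates the role-based data masking responsibility above, using Snowflake dynamic data masking driven from the Python connector. The policy, role, table, and column names are all hypothetical:

```python
# Hedged sketch: create a Snowflake masking policy and attach it to a
# column, so only a privileged role sees raw values. Names are hypothetical.
import snowflake.connector

MASKING_POLICY = """
CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING)
RETURNS STRING ->
    CASE
        WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val
        ELSE '***MASKED***'
    END
"""

conn = snowflake.connector.connect(
    account="my_account", user="admin_user", password="...",
    database="ANALYTICS", schema="PUBLIC", warehouse="ADMIN_WH",
)
cur = conn.cursor()
cur.execute(MASKING_POLICY)
# Attach the policy so non-privileged roles only ever see masked values.
cur.execute(
    "ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask"
)
conn.close()
```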
Posted 3 months ago
13 - 23 years
25 - 35 Lacs
Hyderabad
Work from Office
Position Overview: We are seeking a highly skilled and experienced Snowflake Practice Lead to drive our data strategy, architecture, and implementation using Snowflake. This leadership role requires a deep understanding of Snowflake's cloud data platform, data engineering best practices, and enterprise data management. The ideal candidate will be responsible for defining best practices, leading a team of Snowflake professionals, and driving successful Snowflake implementations for clients.

Key Responsibilities:

Leadership & Strategy:
- Define and drive the Snowflake practice strategy, roadmap, and best practices.
- Act as the primary subject matter expert (SME) for Snowflake architecture, implementation, and optimization.
- Collaborate with stakeholders to understand business needs and align data strategies accordingly.

Technical Expertise & Solutioning:
- Design and implement scalable, high-performance data architectures using Snowflake.
- Develop best practices for data ingestion, transformation, modeling, and security within Snowflake.
- Guide clients on Snowflake migrations, ensuring a seamless transition from legacy systems.
- Optimize query performance, storage utilization, and cost efficiency in Snowflake environments.

Team Leadership & Mentorship:
- Lead and mentor a team of Snowflake developers, data engineers, and architects.
- Provide technical guidance, conduct code reviews, and establish best practices for Snowflake development.
- Train internal teams and clients on Snowflake capabilities, features, and emerging trends.

Client & Project Management:
- Engage with clients to understand business needs and design tailored Snowflake solutions.
- Lead end-to-end Snowflake implementation projects, ensuring quality and timely delivery.
- Work closely with data scientists, analysts, and business stakeholders to maximize data utilization.

Required Skills & Experience:
- 10+ years of experience in data engineering, data architecture, or cloud data platforms.
- 5+ years of hands-on experience with Snowflake in large-scale enterprise environments.
- Strong expertise in SQL, performance tuning, and cloud-based data solutions.
- Experience with ETL/ELT processes, data pipelines, and data integration tools (e.g., Talend, Matillion, dbt, Informatica).
- Proficiency in cloud platforms such as AWS, Azure, or GCP, particularly their integration with Snowflake.
- Knowledge of data security, governance, and compliance best practices.
- Strong leadership, communication, and client-facing skills.
- Experience in migrating from traditional data warehouses (Oracle, Teradata, SQL Server) to Snowflake.
- Familiarity with Python, Spark, or other big data technologies is a plus.

Preferred Qualifications:
- Snowflake SnowPro certification (e.g., SnowPro Core, Advanced Architect, Data Engineer).
- Experience in building data lakes, data marts, and real-time analytics solutions.
- Hands-on experience with DevOps, CI/CD pipelines, and Infrastructure as Code (IaC) in Snowflake environments.

Why Join Us?
- Opportunity to lead cutting-edge Snowflake implementations in a dynamic, fast-growing environment.
- Work with top-tier clients across industries, solving complex data challenges.
- Continuous learning and growth opportunities in cloud data technologies.
- Competitive compensation, benefits, and a collaborative work culture.
Posted 3 months ago
9 - 14 years
17 - 22 Lacs
Chennai
Work from Office
- Data warehousing methodologies such as Kimball, Inmon, and Data Vault
- Oracle Analytics Cloud, Oracle Autonomous Data Warehouse, and Snowflake
- ETL/ELT tools
- Data Engineering
- Data Analysis
- Application Maintenance
- Hyperion
- Governance & Release Management
Posted 3 months ago
15 - 24 years
50 - 65 Lacs
Pune, Mumbai, Bengaluru
Work from Office
The Opportunity
As a Director in Data Advisory, you are responsible for defining and executing data advisory engagements for clients. Your role focuses on building, selling, and delivering data-based solutions and Capgemini Invent's data capabilities. As a data advisory practitioner, you should have a point of view and understanding of build vs. buy, performance considerations, hosting, Data Lake, Data Mesh, and reporting & analytics.

Additional Responsibilities Include:
- Demonstrate the ability to hold strategic CXO conversations and to sell Capgemini Invent's data capabilities to internal and external customers.
- Build and nurture a high-performing data advisory team, fostering growth and skill development.
- Lead the development and execution of data strategy roadmaps and architectural design for clients, ensuring alignment with business objectives.
- Manage and provide technical leadership to large data programs, or to multiple program implementations as required, using agile methodologies.
- Drive revenue growth by identifying, proposing, and winning new data advisory opportunities across key accounts. Create data-based assets and enable asset-based selling.
- Stay updated with the latest technologies to incorporate innovative tools and methodologies into architectural designs.

Our Ideal Candidate:
- Bachelor's/Master's degree in Computer Engineering, Computer Science, or a related field.
- 16+ years of experience in leading and delivering large-scale data solutions, managing enterprise-level projects, and overseeing technical and advisory teams.
- Proficiency in at least 2 cloud technologies (AWS, Azure, GCP), with extensive experience deploying AI-driven solutions at scale.
- Hands-on experience in creating data warehouses, data lakes, ETL/ELT, data pipelines, and reporting/analytics tools and environments.
- Good knowledge of the Gen AI area and how data can support the advent of GenAI.
- Proven experience in deploying cloud solutions at scale, with expertise in architecting, designing, and delivering AI-powered data solutions for large-scale projects.
- 5+ years of hands-on expertise in building and managing data advisory practices, with a strong focus on Cloud, AI/ML, data modeling, DataOps, and data lakes.
- Deep expertise in one of the sectors like Life Sciences, CPR, Auto, or Manufacturing.

Location: Bengaluru/Mumbai/Pune/Gurugram
Posted 3 months ago
7 - 9 years
9 - 13 Lacs
Hyderabad
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: Microsoft Azure Analytics Services, Synapse
Minimum 7.5 year(s) of experience is required
Educational Qualification: Min 15 years of education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models using Microsoft Azure Data Services.

Roles & Responsibilities:
- Design and develop data platform components using Microsoft Azure Data Services.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Assist with the development of data platform blueprints and designs.
- Ensure data platform components are scalable, secure, and performant.
- Provide technical guidance and mentorship to junior team members.

Professional & Technical Skills:
- Must-to-have skills: experience with Microsoft Azure Data Services.
- Good-to-have skills: experience with Microsoft Azure Analytics Services and Synapse.
- Strong understanding of data platform components and architecture.
- Experience with designing and developing scalable, secure, and performant data platforms.
- Experience with data modeling and database design.
- Experience with ETL/ELT processes and tools.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Data Services.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Hyderabad office.

Qualification: Min 15 years of education
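One small building block of the Azure data platforms described above is landing files in Azure Blob Storage / ADLS Gen2. A hedged Python sketch follows; the container, blob path, and local file are hypothetical, and the azure-storage-blob package plus a connection string in the environment are assumed:

```python
# Hedged sketch: land a file in Azure Blob Storage / ADLS Gen2.
# Container, blob, and file names are hypothetical.
import os

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client(container="landing", blob="orders/2025-04-01.csv")

with open("orders.csv", "rb") as fh:
    # overwrite=True keeps re-runs of the ingestion step idempotent.
    blob.upload_blob(fh, overwrite=True)
```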
Posted 3 months ago
10 - 15 years
30 - 35 Lacs
Chennai, Hyderabad
Hybrid
Client Summary: This position is for our partner MGT ( www.mgt.us ), a $440M US-based national technology and advisory solutions leader serving state, local government, education (SLED) and targeted commercial clients across the U.S. Their specialized solutions solve the most critical issues that live at the top of clients' leadership agendas. MGT partners to help clients build resilience, implement systematic change, and strengthen their foundations, now and for the future.

Shift Time Zone: Candidates should be flexible to work until midnight (India time) as needed.

Key Responsibilities:
- Design and develop efficient and scalable Snowflake data pipelines and data models
- Write complex SQL queries to extract, transform, and load data into Snowflake
- Optimize query performance and troubleshoot data quality issues
- Collaborate with data analysts and business users to understand data requirements and translate them into technical solutions
- Stay up to date with the latest Snowflake features and best practices
- Ensure data security and governance by implementing security features like encryption, role-based access control, and data masking
- Collaborate with cross-functional teams to meet data analysis needs

Minimum Qualifications, Skills, and Experience:
- Overall 10+ years of data warehousing and 3-5 years of Snowflake experience
- Strong SQL skills, including knowledge of advanced SQL functions and techniques
- Experience with the Snowflake data warehousing platform
- Proficiency in ETL/ELT processes and tools (e.g. dbt, Rivery, Fivetran)
- Understanding of data modeling and data warehousing concepts
- Ability to work independently and as part of a team
- Strong problem-solving and analytical skills

Preferred Qualifications, Skills, and Experience:
- Experience with Python or other scripting languages
- Knowledge of cloud platforms (e.g. AWS, Azure, GCP)
- Experience with data visualization tools (e.g. Tableau, Power BI)
- Snowflake certifications

Why Join Us? You will benefit from a great culture and environment that contributes to your career growth. Benefits include hybrid work mode, a 5-day workweek, and comprehensive insurance coverage, including Mediclaim, healthcare, and term insurance.

How to Apply? Send your resume to: amrita.dostider@learningmate.com
Application Deadline: 3rd April 2025

Acknowledgement: We are an equal opportunity employer. We commit to celebrating diversity, equity, and inclusion in the workplace.
Posted 3 months ago
7 - 12 years
18 - 30 Lacs
Pune, Delhi NCR, Bengaluru
Hybrid
Role & responsibilities:
- Design, develop, and maintain robust and scalable data pipelines using modern ETL/ELT tools and techniques.
- Implement and manage data orchestration tools such as dbt, Fivetran, Stitch, or Matillion.
- Build and optimize data models for various analytical and reporting needs.
- Ensure data quality and integrity through rigorous testing and validation.
- Monitor and troubleshoot data pipelines and infrastructure, proactively identifying and resolving issues.
- Collaborate with data scientists and analysts to understand their data requirements and provide support.
- Stay up-to-date with the latest data engineering trends and technologies.
- Contribute to the development and improvement of our data engineering best practices.
- Mentor junior data engineers and provide technical guidance.
- Participate in code reviews and contribute to a collaborative development environment.
- Document data pipelines and infrastructure for maintainability and knowledge sharing.
- Contribute to the architecture and design of our overall data platform.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of proven experience as a Data Engineer, preferably in a fast-paced environment.
- Deep understanding of data warehousing concepts and best practices.
- Hands-on experience with at least one data orchestration tool (dbt, Fivetran, Stitch, Matillion).
- Proficiency in SQL and extensive experience with data modeling.
- Experience with cloud-based data warehousing solutions (e.g., Snowflake, BigQuery, Redshift).
- Experience with programming languages like Python or Scala is highly preferred.
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
GCP: Data Engineer

- Hands-on and deep experience working with Google data products (e.g. BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, etc.)
- Hands-on experience in SQL and Unix scripting
- Experience in Python and Kafka
- ELT tool experience and hands-on DBT

Google Cloud Professional Data Engineers are responsible for developing Extract, Transform, and Load (ETL) processes to move data from various sources into the Google Cloud Platform.

Detailed JD:

Must Have:
- Around 8 to 11 years of experience, with strong knowledge in migrating on-premise ETLs to Google Cloud Platform (GCP)
- 2-3 years of strong BigQuery + GCP experience
- Very strong SQL writing skills
- Hands-on experience in Python programming
- Hands-on experience in the design, development, and implementation of data warehousing in the ETL process
- Experience in IT data analytics projects, with hands-on experience in migrating on-premise ETLs to Google Cloud Platform (GCP) using cloud-native tools such as BigQuery, Google Cloud Storage, Composer, Dataflow, and Cloud Functions
- GCP certified Associate Cloud Engineer
- Practical understanding of data modelling (dimensional & relational), performance tuning, and debugging
- Extensive experience in data warehousing using data extraction, data transformation, data loading, and business intelligence technologies using ELT design
- Working experience in CI/CD using GitLab and Jenkins

Good to Have:
- DBT tool experience
- Practical experience in big data application development involving various data processing techniques for data ingestion, data modelling, in-stream data processing, and batch analytics, using various distributions of Hadoop and its ecosystem tools like HDFS, Hive, Pig, Sqoop, and Spark
- Document all work implemented using Confluence and track all requests and changes using Jira
- Involvement in both technical and managerial activities, with experience in GCP

Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build and maintain the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP data warehousing technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through data centers and GCP regions.
- Work with data and analytics experts to strive for greater functionality in the data systems.

Qualifications:
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience building and optimizing data warehousing data pipelines (ELT and ETL), architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable big data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Technical Skillset:
- Experience with data warehouse tools: GCP BigQuery, GCP Big Data, Dataflow, Teradata, etc.
- Experience with relational SQL and NoSQL databases, including PostgreSQL and MongoDB.
- Experience with data pipeline and workflow management tools: Data Build Tool (DBT), Airflow, Google Cloud Composer, Google Cloud Pub/Sub, etc.
- Experience with GCP cloud services: primarily BigQuery, Kubernetes, Cloud Functions, Cloud Composer, Pub/Sub, etc.
- Experience with object-oriented / functional scripting languages: Python, Java, Terraform, etc.
- Experience with CI/CD pipeline and workflow management tools: GitHub Enterprise, Cloud Build, Codefresh, etc.
- Experience with data analytics and visualization tools: Tableau BI Tool (on-prem and SaaS), Data Analytics Workbench (DAW), Visual Data Studio, etc.
- GCP Data Engineer certification is mandatory.
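A minimal illustration of the in-warehouse ELT pattern this listing describes, driving a BigQuery SQL transform from Python: the dataset and table names are hypothetical, and the google-cloud-bigquery package with application-default credentials is assumed:

```python
# Hedged ELT sketch on BigQuery: transform inside the warehouse with SQL
# driven from Python. Dataset and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

TRANSFORM_SQL = """
CREATE OR REPLACE TABLE analytics.daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM raw.orders
GROUP BY order_date
"""

job = client.query(TRANSFORM_SQL)  # starts the query job
job.result()                       # blocks until the job completes
print(f"Transform finished, job id: {job.job_id}")
```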
Posted 3 months ago
6 - 8 years
8 - 10 Lacs
Hyderabad
Work from Office
About Us
HighRadius, a renowned provider of cloud-based Autonomous Software for the Office of the CFO, has transformed critical financial processes for over 800+ leading companies worldwide. Trusted by prestigious organizations like 3M, Unilever, Anheuser-Busch InBev, Sanofi, Kellogg Company, Danone, Hershey's, and many others, HighRadius optimizes order-to-cash, treasury, and record-to-report processes, earning us back-to-back recognition in Gartner's Magic Quadrant and a prestigious spot in the Forbes Cloud 100 list for three consecutive years. With a remarkable valuation of $3.1B and an impressive annual recurring revenue exceeding $100M, we experience robust year-over-year growth of 24%. With a global presence spanning 8+ locations and a recent addition in Poland, we're in the pre-IPO stage, poised for rapid growth. We invite passionate and diverse individuals to join us on this exciting path to becoming a publicly traded company and shape our promising future.

Job Summary
HighRadius is looking for a dynamic Java professional to join our Engineering Team. This role's responsibilities include participation in software development activities, writing clean and efficient code for various applications, and running tests to improve system functionality. You will write code that is easily maintainable and highly reliable, demonstrate knowledge of common programming best practices, and mentor junior members of the team in delivering sprint stories and tasks.

Key Responsibilities
- Demonstrate excellent leadership and hands-on technical, design, and architecture skills, and lead the team to arrive at optimal solutions for business challenges.
- Review requirements and specifications, and create technical design documents.
- Estimate tasks and meet milestones and deadlines appropriately.
- Provide technical guidance and support during the development phase of a project and ensure project delivery.
- Follow industry best practices for delivering high-quality software in a timely manner and to specification.
- Identify risks or opportunities associated with current or new technology use. Plan and execute PoCs as necessary.
- Strive for continuous improvement of development processes and standards.
- Good interpersonal communication and organizational skills to contribute as a leading member of global, distributed teams focused on delivering quality, highly performant, and scalable solutions.
- Demonstrated ability to rapidly learn new and emerging technologies and develop a vision of their suitability for technology transformation.
- Effectively communicate with team members, project managers, clients, and other stakeholders as required.

Skill & Experience Needed
- Experience in the Payments domain is a must-have.
- Bachelor's degree required.
- Experience range: 6+ years.
- Technology stack: Core Java, Java 8, collections, exception handling, Hibernate, Spring, SQL, Ext.js/React.js.
- Good to have: Ext JS or any UI framework experience, Elasticsearch, cloud architecture (AWS, GCP, Azure).
- Knowledge of design patterns, Jenkins, Git, Grafana, ELT, JUnit.
- Deep understanding and experience of architecting and deploying end-to-end scalable web applications.
- Deep understanding of SCRUM/Agile process, project tracking, monitoring, and risk management.

What you get
- Competitive salary
- Fun-filled work culture (https://www.highradius.com/culture/)
- Equal employment opportunities
- Opportunity to build with a pre-IPO Global SaaS Centaur.
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Summary
HighRadius is looking for a dynamic Java professional to join our Engineering Team. This role's responsibilities include participation in software development activities, writing clean and efficient code for various applications, and running tests to improve system functionality. You will write code that is easily maintainable and highly reliable, demonstrate knowledge of common programming best practices, and mentor junior members of the team in delivering sprint stories and tasks.

Key Responsibilities
- Work independently to translate product requirements into working software with high quality
- Participate in collaborative software development with peers within the team through code reviews and design discussions
- Produce highly performant software in a cloud-native environment
- Debug performance and functional issues in production
- Participate in raising the technology bar within the team

Skill & Experience Needed
- Bachelor's degree required
- Experience range: 4+ years
- Technology stack: Core Java, Java 8, Hibernate, Spring, SQL
- Good to have: Ext JS or any UI framework experience, Elasticsearch, cloud architecture (AWS, GCP, Azure)
- Knowledge of design patterns, Jenkins, Git, Grafana, ELT, JUnit

What You'll Get
- Competitive salary
- Fun-filled work culture (https://www.highradius.com/culture/)
- Equal employment opportunities
- Opportunity to build with a pre-IPO Global SaaS Centaur
Posted 3 months ago
4 - 6 years
6 - 8 Lacs
Pune
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses.

Your Role
- Design, develop, and maintain scalable ETL/ELT pipelines to process structured, semi-structured, and unstructured data on the cloud.
- Build and optimize cloud storage, data lakes, and data warehousing solutions using platforms like Snowflake, BigQuery, AWS Redshift, ADLS, and S3.
- Develop cloud utility functions using services like AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, and Azure Functions.
- Utilize cloud-native data integration tools, such as Azure Databricks, Azure Data Factory, AWS Glue, AWS EMR, Dataflow, and Dataproc, to transform and analyze data.

Your Profile
- 4-5 years of IT experience, with a minimum of 3 years of experience in creating data pipelines and ETL/ELT on the cloud.
- Experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud database, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3.
- Experience in writing cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, and Azure Functions.
- Experience in using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc.
- Exposure to cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
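As an illustration of the "cloud utility functions" mentioned above, here is a hedged sketch of an AWS Lambda handler that reacts to an S3 upload and copies the object into a curated prefix. The bucket names are hypothetical; boto3 is available by default in the AWS Lambda Python runtime:

```python
# Hedged sketch of a cloud utility function: an AWS Lambda handler
# triggered by S3 event notifications. Bucket names are hypothetical.
import boto3

s3 = boto3.client("s3")  # created outside the handler for reuse across invocations

def handler(event, context):
    # S3 event notifications deliver one or more records per invocation.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.copy_object(
            Bucket="curated-zone-bucket",          # hypothetical target bucket
            Key=f"curated/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"processed": len(event["Records"])}
```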
Posted 3 months ago
3 - 6 years
5 - 8 Lacs
Pune
Work from Office
What will you do:
- Work closely with team members and stakeholders to turn business problems into analytical projects, translated requirements, and solutions
- Work cross-functionally with teams on data migration, translation, and organizational initiatives
- Translate large volumes of raw, unstructured data into highly visual and easily digestible formats
- Develop and manage data pipelines for predictive analytics modeling, model lifecycle management, and deployment (Extract-Load-Transform, ELT)
- Recommend ways to improve data reliability, efficiency, and quality
- Help create, maintain, and implement tools, libraries, and systems to increase the efficiency and scalability of the team
- Develop and maintain proper controls and governance for data access
- Communicate data-related challenges and help to prioritize resolutions based on alignment with organizational goals
- Consult and assist consumers of XE managed data

What will you bring:
- Ability to critically analyze data, test hypotheses, and validate data quality
- Ability to problem-solve and to test and implement new technologies and tools
- Solid grasp of data systems and how they interact with each other
- Exceptional analytical skills to detect the source and resolution of highly complex problems
- Proficient Python programming skills are required; experience with Python-based analysis frameworks such as pandas, pyspark, and pyarrow is a plus
- Experience with Starburst, Snowflake, and other cloud data warehousing / data lake technologies preferred
- Excellent data manipulation skills required, namely using SQL and the Python scientific stack (pandas, numpy, scikit-learn)
- Experience extracting unstructured data from REST APIs, NoSQL databases, and object storage (Ceph/S3)
- Experience with Linux system administration, shell scripting, and virtualization technology (containers) is required
- Mastery of git (version control) and experience with versioning, merge request, review, etc. processes and techniques is required
- Experience with distributed computing frameworks (e.g., dask, pyspark) preferred
- OpenShift application development and administration is a plus
- Experience deploying applications using PaaS technologies (e.g., OpenShift, Airflow) is a plus
- Well-versed in, and with a desire to stay on top of, the current industry landscape of computer software, programming languages, and technology
- Bachelor's degree in a related field (e.g., Computer Science or Software Engineering) with 5+ years of relevant working experience, or a Master's degree with 3+ years of working experience

About Red Hat
Red Hat is the world's leading provider of enterprise software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Diversity, Equity & Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from diverse backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions of diversity that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email . General inquiries, such as those regarding the status of a job application, will not receive a reply.
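A hedged sketch of one pipeline step from the description above: pull records from a REST API into pandas and stage them as Parquet for a downstream warehouse load. The endpoint URL and field names are hypothetical; the requests, pandas, and pyarrow packages are assumed:

```python
# Illustrative extract-and-stage step. URL and fields are hypothetical.
import pandas as pd
import requests

resp = requests.get("https://api.example.com/v1/events", timeout=30)
resp.raise_for_status()                 # fail fast on HTTP errors

df = pd.DataFrame(resp.json()["results"])
df["ingested_at"] = pd.Timestamp.now(tz="UTC")

# Parquet keeps types and compresses well for object storage (e.g. S3/Ceph).
df.to_parquet("events.parquet", index=False)
```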
Posted 3 months ago
6 - 11 years
19 - 32 Lacs
Bengaluru, Hyderabad, Kolkata
Work from Office
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Snowflake Data Engineer. In this role, the Snowflake Architect is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Responsibilities:
- Should have experience in the IT industry.
- Strong experience in building/designing data warehouses, data lakes, and data marts, with end-to-end implementation experience focusing on large enterprise scale and Snowflake implementations on any of the hyperscalers.
- Strong understanding of Snowflake architecture.
- Able to create designs and data modelling independently.
- Able to create high-level and low-level design documents based on requirements.
- Strong experience with building productionized data ingestion and data pipelines in Snowflake.
- Should have prior experience as an Architect in interacting with customers, team/delivery leaders, and vendor teams.
- Should have strong experience in migration/greenfield projects in Snowflake.
- Should have experience in implementing Snowflake best practices for network policy, storage integration, data governance, cost optimization, resource monitoring, data ingestion, transformation, and consumption layers.
- Should have good experience with Snowflake RBAC and data security.
- Should have good experience in implementing a strategy for CDC or SCD Type 2 (see the sketch after this listing).
- Strong experience in Snowflake features, including new Snowflake features.
- Should have good experience in Python.
- Should have experience in AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and a few Azure services (Blob Storage, ADLS, ADF).
- Should have experience in DBT.
- Must have Snowflake SnowPro Core or SnowPro Advanced Architect certification.
- Should have experience/knowledge of orchestration and scheduling tools.
- Should have a good understanding of ETL processes and ETL tools.
- Good understanding of agile methodology.
- Good to have some understanding of GenAI.
- Good to have exposure to other databases such as Redshift, Databricks, SQL Server, Oracle, PostgreSQL, etc.
- Able to create POCs, roadmaps, solution architectures, estimations, and implementation plans.
- Experience with Snowflake integrations with other data processing systems.

Qualifications we seek in you!
Minimum qualifications:
- B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree with good IT experience and strong experience as a Snowflake Architect.
- Skill matrix: Snowflake, Python, AWS/Azure, data modeling, design patterns, DBT, ETL processes, and data warehousing concepts.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
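The sketch below illustrates one common SCD Type 2 strategy referenced in this listing, expressed as two Snowflake statements driven from Python: first expire changed current rows, then insert fresh versions from a staging table. All table and column names (including the attr_hash change-detection column) are hypothetical; snowflake-connector-python is assumed:

```python
# Hedged SCD Type 2 sketch on Snowflake. Tables/columns are hypothetical;
# attr_hash is a precomputed hash of the tracked attributes.
import snowflake.connector

EXPIRE_CHANGED = """
UPDATE dim_customer d
SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
FROM stg_customer s
WHERE d.customer_id = s.customer_id
  AND d.is_current
  AND d.attr_hash <> s.attr_hash
"""

INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer
    (customer_id, name, segment, attr_hash, valid_from, valid_to, is_current)
SELECT s.customer_id, s.name, s.segment, s.attr_hash,
       CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
       ON d.customer_id = s.customer_id AND d.is_current
WHERE d.customer_id IS NULL  -- no current row: brand-new key, or just expired above
"""

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    database="ANALYTICS", schema="PUBLIC", warehouse="ETL_WH",
)
cur = conn.cursor()
cur.execute(EXPIRE_CHANGED)        # close out rows whose attributes changed
cur.execute(INSERT_NEW_VERSIONS)   # open new current versions
conn.close()
```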
Posted 3 months ago
2 - 6 years
12 - 16 Lacs
Bengaluru
Work from Office
Responsibilities:
- Design, construct, install, test, and maintain highly scalable data management systems using big data technologies such as Apache Spark (with a focus on Spark SQL) and Hive.
- Manage and optimize our data warehousing solutions, with a strong emphasis on SQL performance tuning.
- Implement ETL/ELT processes using tools like Talend or custom scripts, ensuring efficient data flow and transformation across our systems.
- Utilize AWS services including S3, EC2, and EMR to build and manage scalable, secure, and reliable cloud-based solutions.
- Develop and deploy scripts in Linux environments, demonstrating proficiency in shell scripting.
- Utilize scheduling tools such as Airflow or Control-M to automate data processes and workflows.
- Implement and maintain metadata-driven frameworks, promoting reusability, efficiency, and data governance.
- Collaborate closely with DevOps teams utilizing SDLC tools such as Bamboo, JIRA, Bitbucket, and Confluence to ensure seamless integration of data systems into the software development lifecycle.
- Communicate effectively with both technical and non-technical stakeholders for handover, incident management reporting, etc.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Demonstrated expertise in big data technologies, specifically Apache Spark (focus on Spark SQL) and Hive.
- Extensive experience with AWS services, including S3, EC2, and EMR.
- Strong expertise in data warehousing and SQL, with experience in performance optimization.
- Experience with ETL/ELT implementation (such as Talend).
- Proficiency in Linux, with a strong background in shell scripting.

Preferred technical and professional experience:
- Familiarity with scheduling tools like Airflow or Control-M.
- Experience with metadata-driven frameworks.
- Knowledge of DevOps tools such as Bamboo, JIRA, Bitbucket, and Confluence.
- Excellent communication skills and a willing attitude towards learning.
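A minimal sketch of the Spark SQL / Hive work this listing emphasizes: enable Hive support, run a Spark SQL aggregation over a Hive table, and persist the result. The database and table names are hypothetical, and a Spark runtime with a Hive metastore is assumed:

```python
# Illustrative Spark SQL over Hive tables. Names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-aggregation")
    .enableHiveSupport()          # read/write Hive metastore tables
    .getOrCreate()
)

daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM sales_db.orders
    GROUP BY order_date
""")

# Persist as a managed Hive table for downstream consumers.
daily.write.mode("overwrite").saveAsTable("sales_db.daily_orders")
```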
Posted 3 months ago
6 - 11 years
12 - 16 Lacs
Bengaluru
Work from Office
Responsibilities:
- Design, construct, install, test, and maintain highly scalable data management systems using big data technologies such as Apache Spark (with a focus on Spark SQL) and Hive.
- Manage and optimize our data warehousing solutions, with a strong emphasis on SQL performance tuning.
- Implement ETL/ELT processes using tools like Talend or custom scripts, ensuring efficient data flow and transformation across our systems.
- Utilize AWS services including S3, EC2, and EMR to build and manage scalable, secure, and reliable cloud-based solutions.
- Develop and deploy scripts in Linux environments, demonstrating proficiency in shell scripting.
- Utilize scheduling tools such as Airflow or Control-M to automate data processes and workflows.
- Implement and maintain metadata-driven frameworks, promoting reusability, efficiency, and data governance.
- Collaborate closely with DevOps teams utilizing SDLC tools such as Bamboo, JIRA, Bitbucket, and Confluence to ensure seamless integration of data systems into the software development lifecycle.
- Communicate effectively with both technical and non-technical stakeholders for handover, incident management reporting, etc.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Demonstrated expertise in big data technologies, specifically Apache Spark (focus on Spark SQL) and Hive.
- Extensive experience with AWS services, including S3, EC2, and EMR.
- Strong expertise in data warehousing and SQL, with experience in performance optimization.
- Experience with ETL/ELT implementation (such as Talend).
- Proficiency in Linux, with a strong background in shell scripting.

Preferred technical and professional experience:
- Familiarity with scheduling tools like Airflow or Control-M.
- Experience with metadata-driven frameworks.
- Knowledge of DevOps tools such as Bamboo, JIRA, Bitbucket, and Confluence.
- Excellent communication skills and a willing attitude towards learning.
Posted 3 months ago
3 - 7 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: As per Accenture standards

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with Microsoft Azure Data Services and collaborating with cross-functional teams to deliver impactful solutions.

Roles & Responsibilities:
- Lead the design, development, and deployment of applications using Microsoft Azure Data Services.
- Act as the primary point of contact for the project, collaborating with cross-functional teams to ensure timely delivery of solutions.
- Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards.
- Conduct detailed analysis of business requirements, translating them into technical specifications and design documents.
- Ensure the quality and integrity of the application through rigorous testing and debugging.

Professional & Technical Skills (must-to-have):
- Azure Data Factory (data pipeline and framework implementation)
- SQL Server (strong SQL development), including SQL stored procedures
- ETL/ELT and DWH concepts
- Azure DevOps
- Azure Blob, Gen1/Gen2

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful solutions.
- This position is based at our Bengaluru office.

Qualification: As per Accenture standards
Posted 3 months ago