
127 Apache NiFi Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

We are seeking a highly skilled Integration Specialist proficient in Apache NiFi, Java, and Spring Boot to provide support to a leading bank in the UAE from our offshore development center.

**Education:** A degree or post-graduate qualification in Computer Science or a related field is required, or equivalent industry experience.

**Experience:**
- Minimum 5 years of experience implementing end-to-end integration solutions using NiFi processors.
- Minimum 5 years of experience in Java and Spring Boot with microservices.
- Minimum 3 years of experience in application security, including SSL certificates and cryptography.
- Minimum 2 years of experience with distributed architecture.

**Technical Skills:**
- Proficient in designing and developing NiFi and MiNiFi flows using various processors, including failover scenarios.
- Strong knowledge of SSL certificates, communication protocols such as SFTP and Site-to-Site, and cryptography.
- Proficient in distributed architecture using ZooKeeper.
- Proficient in Java and microservices, with an understanding of distributed services resiliency and monitoring in a production environment.

**Functional Skills:**
- Experience adhering to best practices for coding, security, unit testing, and documentation.
- Experience in the Banking/Financial domain and Agile methodology preferred.
- Ensure the quality of technical and application architecture and system design organization-wide.
- Conduct effective research and benchmark technology against other best-in-class technologies.

**Soft Skills:**
- Excellent communication skills, a positive attitude, and enthusiasm for learning.
- Self-motivated, with the ability to take ownership and drive tasks independently.
- Collaborative team player with strong interpersonal skills to engage with senior management in IT and Business.
- Capable of training and mentoring team members.

**Requirements:** Immediate joiners or candidates with a maximum 30-day notice period are preferred. This role offers a challenging opportunity to work with cutting-edge technologies and contribute to the success of a top bank in the UAE.
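To make the flow-operations side of this role concrete, here is a minimal Python sketch that polls a NiFi instance over its REST API to check queued flowfiles. The host, access token, and use of the requests library are assumptions for illustration, and the endpoint paths should be verified against your NiFi version.

```python
# Hypothetical health/failover check for a NiFi instance via its REST API.
# Host and token handling are assumptions; endpoint paths follow the public
# NiFi REST API but should be verified against your NiFi version.
import requests

NIFI_URL = "https://localhost:8443/nifi-api"   # assumed address
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"            # e.g. obtained via /access/token

headers = {"Authorization": f"Bearer {TOKEN}"}

def root_group_status():
    """Return aggregate queued-flowfile counts for the root process group."""
    resp = requests.get(f"{NIFI_URL}/flow/process-groups/root/status",
                        headers=headers, verify=False, timeout=30)
    resp.raise_for_status()
    snapshot = resp.json()["processGroupStatus"]["aggregateSnapshot"]
    return snapshot["queuedCount"], snapshot["queuedSize"]

if __name__ == "__main__":
    count, size = root_group_status()
    print(f"Queued flowfiles: {count} ({size})")
```

A check like this could feed an alert when queues back up, which is one way failover scenarios tend to surface in production flows.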

Posted 1 day ago

Apply

6.0 - 15.0 years

0 Lacs

Karnataka

On-site

You have a unique opportunity to join as an Integration Architect specializing in Apache NiFi and Kubernetes. Your primary responsibilities will involve developing and performing automated builds, testing, and deployments in conjunction with Apache NiFi v2. This role requires at least 15 years of relevant experience and an in-depth understanding of various technologies and tools. Your expertise should include proficiency in the Linux CLI, extensive knowledge of SQL, and familiarity with ServiceNow CMDB. You must possess a good grasp of security principles, particularly OAuth basic/2.0, and IP networking concepts such as TCP, UDP, DNS, DHCP, firewalls, and IP routing. Additionally, experience with web services using SOAP/REST APIs, scripting languages like Bash/RegEx/Python/Groovy, and Java programming for custom code creation is essential. As an Integration Architect, you will be expected to build data integration workflows using NiFi, NiFi Registry, and custom NiFi processors. Performance tuning of NiFi processing, working with Apache Kafka, and following Agile methodology are also crucial aspects of the role. Your responsibilities will extend to designing, deploying, and managing Kubernetes clusters, infrastructure-as-code tools like Crossplane, and container orchestration. Proficiency in GitOps practices, container monitoring/logging tools, networking principles, and identity/access management tools is highly desirable. You will play a pivotal role in maintaining Kubernetes clusters for open-source applications, implementing GitOps continuous delivery with ArgoCD, managing cloud resources with the Crossplane API, and ensuring secure access with Keycloak. Your expertise in secrets management, API gateway management, persistent storage solutions, and certificate management will be invaluable for the organization. Furthermore, implementing security best practices, documenting procedures, and contributing to open-source projects are key elements of this dynamic role. The preferred qualifications for this position include a Bachelor's degree in Computer Science or a related field, Kubernetes certification, and knowledge of software-defined networking solutions for Kubernetes. Your soft skills, such as effective communication, stakeholder management, taking ownership, and autonomy, will be essential in leading technical discussions and resolving issues effectively. If you are passionate about integration architecture, possess a strong technical background, and are eager to work in a collaborative environment, this role offers a challenging yet rewarding opportunity to showcase your skills and contribute to cutting-edge projects in a fast-paced setting.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

The role of Lead, Software Engineer at Mastercard involves playing a crucial part in the Data Unification process across different data assets to create a unified view of data from multiple sources. This position will focus on driving insights from available data sets and supporting the development of new data-driven cyber products, services, and actionable insights. The Lead, Software Engineer will collaborate with various teams such as Product Manager, Data Science, Platform Strategy, and Technology to understand data needs and requirements for delivering data solutions that bring business value. Key responsibilities of the Lead, Software Engineer include performing data ingestion, aggregation, and processing to derive relevant insights, manipulating and analyzing complex data from various sources, identifying innovative ideas and delivering proof of concepts, prototypes, and proposing new products and enhancements. Moreover, integrating and unifying new data assets to enhance customer value, analyzing transaction and product data to generate actionable recommendations for business growth, and collecting feedback from clients, development, product, and sales teams for new solutions are also part of the role. The ideal candidate for this position should have a good understanding of streaming technologies like Kafka and Spark Streaming, proficiency in programming languages such as Java, Scala, or Python, experience with Enterprise Business Intelligence Platform/Data platform, strong SQL and higher-level programming skills, knowledge of data mining and machine learning algorithms, and familiarity with data integration tools like ETL/ELT tools including Apache NiFi, Azure Data Factory, Pentaho, and Talend. Additionally, they should possess the ability to work in a fast-paced, deadline-driven environment, collaborate effectively with cross-functional teams, and articulate solution requirements for different groups within the organization. It is essential for all employees working at or on behalf of Mastercard to adhere to the organization's security policies and practices, ensure the confidentiality and integrity of accessed information, report any suspected information security violations or breaches, and complete all mandatory security trainings in accordance with Mastercard's guidelines. The Lead, Software Engineer role at Mastercard offers an exciting opportunity to contribute to the development of innovative data-driven solutions that drive business growth and enhance customer value proposition.,
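As a hedged illustration of the Kafka and Spark Streaming work this listing calls for, the sketch below shows a PySpark Structured Streaming job that reads events from Kafka and produces a simple windowed aggregate. The broker address, topic name, and windowing are placeholders, not Mastercard's pipeline.

```python
# Minimal sketch of Kafka + Spark Structured Streaming ingestion/aggregation.
# Requires the spark-sql-kafka package on the Spark classpath; broker and
# topic names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("txn-stream-demo").getOrCreate()

# Read raw transaction events from Kafka.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
          .option("subscribe", "transactions")                # assumed topic
          .load()
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp"))

# Aggregate event counts per 1-minute window as a simple derived insight.
counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```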

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You should have hands-on experience in deploying and managing large-scale dataflow products such as Cribl, Logstash, or Apache NiFi. Additionally, you should be proficient in integrating data pipelines with cloud platforms like AWS, Azure, and Google Cloud, as well as on-premises systems. It is essential to have experience in developing and validating field extraction using regular expressions. A strong understanding of operating systems and networking concepts is required, including Linux/Unix system administration, HTTP, and encryption. You should possess knowledge of software version control, deployment, and build tools following DevOps SDLC practices, such as Git, Jenkins, and Jira. Strong analytical and troubleshooting skills are crucial for this role, along with excellent verbal and written communication skills. An appreciation of Agile methodologies, specifically Kanban, is also expected. Desirable skills for this position include enterprise experience with a distributed event streaming platform like Apache Kafka, AWS Kinesis, Google Pub/Sub, or MQ. Experience in infrastructure automation and integration, preferably using Python and Ansible, would be beneficial. Familiarity with cybersecurity concepts, event types, and monitoring requirements is a plus. Experience in parsing and normalizing data in Elasticsearch using Elastic Common Schema (ECS) would also be advantageous.
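As a small, self-contained example of the "develop and validate field extraction using regular expressions" requirement, the Python snippet below parses fields out of an access-log-style line. The log format and field names are invented for illustration.

```python
# Illustrative field extraction with a regular expression; the log format
# and field names are invented, not a specific product's schema.
import re

LOG_PATTERN = re.compile(
    r'(?P<client_ip>\d+\.\d+\.\d+\.\d+)\s+-\s+-\s+'
    r'\[(?P<timestamp>[^\]]+)\]\s+'
    r'"(?P<method>\w+)\s+(?P<path>\S+)\s+HTTP/\d\.\d"\s+'
    r'(?P<status>\d{3})\s+(?P<bytes>\d+)'
)

sample = '10.1.2.3 - - [21/Jul/2025:10:15:32 +0000] "GET /health HTTP/1.1" 200 512'

match = LOG_PATTERN.search(sample)
if match:
    fields = match.groupdict()
    print(fields["client_ip"], fields["status"], fields["path"])
else:
    print("no match -- extraction rule needs revisiting")
```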

Posted 4 days ago

Apply

5.0 - 8.0 years

7 - 7 Lacs

Chennai

Hybrid

Overview: TekWissen is a global workforce management provider operating throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place, one that benefits lives, communities, and the planet.

Job Title: Software Engineer Senior
Location: Chennai
Work Type: Hybrid

Position Description: This fast-paced position is intended for people who like to build analytics platforms and tooling that deliver real value to the business. Applicants should have a strong desire to learn new technologies and be interested in providing guidance that will help drive the adoption of these tools. The Analytics Data Management (ADM) Product Engineer will assist with the engineering of strategic data management platforms from Informatica, primarily Enterprise Data Catalog, and Apache NiFi. Other technologies include Informatica IICS/IDMC, PowerCenter, Data Catalog, and Master Data Management; IBM Information Server and Cloud Pak for Data (CP4D); and Google Cloud Data Fusion. This person will also collaborate with Infrastructure Architects to design and implement environments based on these technologies for use in the client's enterprise data centers. Platforms may be based on-premises or hosted in Google Cloud.

Skills Required: Informatica
Skills Preferred: Cloud Infrastructure

Experience Required:
- Informatica (IICS/IDMC, PowerCenter, Data Catalog, Master Data Management): installation, configuration, administration, and troubleshooting. Specific experience with Informatica Data Catalog is essential.
- Apache NiFi: strong Java development experience to create custom NiFi processors, and expertise in deploying and managing NiFi applications on Red Hat OS environments.
- Google Cloud Platform (GCP): provisioning, administration, and troubleshooting of products. Specific experience with Dataplex or Google Cloud Data Fusion (CDF) is highly preferred.

Experience Range: 5-8 years

Summary of Responsibilities:
- Engineer, test, and modernize data management platforms, primarily Informatica Enterprise Data Catalog and Apache NiFi.
- Enable cloud migrations for Analytics platforms.
- Define, document, and monitor global (Follow-the-Sun) support procedures (Incident Management, Request Management, Event Management, etc.).
- Provide Asia-Pacific (IST) 2nd-level support for these products.

Responsibilities Detail: Installing and configuring products; working with platform support teams and vendor support to resolve issues; thoroughly testing product functionality on the platform; developing custom installation guides, configurations, and scripts that are consistent with the client's IT security policy; providing 2nd-level support for product-related issues; developing new tools and processes to ensure effective implementation and use of the technologies; implementing monitoring/alerting and analyzing usage data to ensure optimal performance of the infrastructure; maintaining a SharePoint site with relevant documentation, FAQs, and processes necessary to promote and support the use of these technologies.

Required Skills: Ability to collect and clearly document requirements. Ability to prioritize work and manage multiple assignments. Ability to create and execute detailed project plans and test plans.
Education Required: Bachelor's Degree
Education Preferred: Bachelor's Degree

TekWissen Group is an equal opportunity employer supporting workforce diversity.

Posted 4 days ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Atomicwork is on a mission to transform the digital workplace experience by uniting people, processes, and platforms through AI automation. Our team is building a modern service management platform that enables growing businesses to reduce operational complexity and drive business success. We are seeking a skilled and motivated Data Pipeline Engineer to join our team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines that support our enterprise search capabilities. Your work will ensure that data from various sources is efficiently ingested, processed, and indexed, enabling seamless and secure search experiences across the organisation. This position is based out of our Bengaluru office. We offer competitive pay to employees and practical benefits for their whole family. If this sounds interesting to you, read on.

What We're Looking For (Qualifications): We value hands-on skills and a proactive mindset; formal qualifications are less important than your ability to deliver results and collaborate effectively. Proficiency in programming languages such as Python, Java, or Scala. Strong experience with data pipeline frameworks and tools (e.g., Apache Airflow, Apache NiFi). Experience with search platforms like Elasticsearch or OpenSearch. Familiarity with data ingestion, transformation, and indexing processes. Understanding of enterprise search concepts, including crawling, indexing, and query processing. Knowledge of data security and access control best practices. Experience with cloud platforms (AWS, GCP, or Azure) and related backend search/integration services. Familiarity with Model Context Protocol (MCP) is a strong plus. Strong problem-solving and analytical skills. Excellent communication and collaboration.

What You'll Do (Responsibilities): Design, develop, and maintain data pipelines for enterprise search applications. Implement data ingestion processes from various sources, including databases, file systems, and APIs. Develop data transformation and enrichment processes to prepare data for indexing. Integrate with search platforms to index and update data efficiently. Ensure data quality, consistency, and integrity throughout the pipeline. Monitor pipeline performance and troubleshoot issues as they arise. Collaborate with cross-functional teams, including data scientists, engineers, and product managers. Implement security measures to protect sensitive data during processing and storage. Document pipeline architecture, processes, and best practices. Stay updated with industry trends and advancements in data engineering and enterprise search.

Why we are different (culture): As a part of Atomicwork, you can shape our company and business from idea to production. Our cultural values also set the bar high, helping us create a better workplace for everyone. Agency: be self-directed; take initiative and solve problems creatively. Taste: hold a high bar; sweat the details; build with care and discernment. Ownership: we demonstrate unwavering commitment to our mission and goals, taking full responsibility for triumphs and setbacks. Mastery: we relentlessly pursue continuous self-improvement as individuals and teams, dedicating ourselves to constant learning and growth. Impatience: we recognize that our world moves swiftly and are driven by an unyielding desire to progress with every endeavor. Customer Obsession: we place our customers at the heart of everything we do, relentlessly seeking to understand their needs and exceed their expectations.

What we offer (compensation and benefits): We are big on benefits that make sense to you and your family. Fantastic team (the #1 reason why everybody joins us). Convenient, well-located offices spread over five different cities. Paid time off, with unlimited sick leaves and 15 days off every year. Comprehensive health insurance with up to 75% of the premium covered. Flexible allowances with hassle-free reimbursements across spends. Annual outings for everyone to have fun together.

What next (applying for this role): Click on the apply button to get started with your application. Answer a few questions about yourself and your work. Wait to hear from us about the next steps. Do you have anything else to tell us? Email careers@atomicwork and let us know what's on your mind.
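To ground the ingest-transform-index pipeline described above, here is a minimal Airflow DAG sketch. It assumes Airflow 2.x; the task bodies, schedule, and document shape are placeholders rather than Atomicwork's actual implementation.

```python
# Minimal Airflow 2.x DAG sketch for an enterprise-search ingestion pipeline.
# Older Airflow versions use schedule_interval instead of schedule.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    # Pull documents from a source system (database, file share, API, ...).
    return [{"id": 1, "title": "Quarterly report", "body": "..."}]

def transform(ti, **_):
    docs = ti.xcom_pull(task_ids="extract")
    # Enrich/normalize documents before indexing.
    return [{**d, "title": d["title"].lower()} for d in docs]

def index(ti, **_):
    docs = ti.xcom_pull(task_ids="transform")
    # A real pipeline would bulk-index into Elasticsearch/OpenSearch here.
    print(f"indexing {len(docs)} documents")

with DAG(
    dag_id="enterprise_search_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    index_t = PythonOperator(task_id="index", python_callable=index)

    extract_t >> transform_t >> index_t
```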

Posted 4 days ago

Apply

2.0 - 7.0 years

6 - 10 Lacs

Pune

Work from Office

Atos is seeking a highly skilled and experienced Kubernetes Expert with strong programming skills to join our dynamic team. As a Kubernetes Expert, you will play a crucial role in designing, implementing, and maintaining our Kubernetes infrastructure to ensure scalability, reliability, and efficiency of our services. Responsibilities: Develop and maintain Kubernetes clusters for open-source applications (like Apache Nifi, Apache Airflow), ensuring high availability, scalability, and security. Deploy, configure, and manage clusters on Kubernetes, including setting up leader election, shared state management, and clustering. Utilize ArgoCD for GitOps continuous delivery, automating the deployment of applications and resources within the Kubernetes environment. Use Crossplane to manage cloud resources and services, ensuring seamless integration and provisioning. Implement and manage identity and access management using Keycloak, ensuring secure access to the application. Utilize Azure Vault for securely storing and managing sensitive information such as API keys, passwords, and other secrets required for data workflows. Manage ingress traffic to the application using Kong, providing features such as load balancing, security, and monitoring of API requests. Ensure the availability and management of persistent block storage for various application repositories. Set up and manage certificates using Cert-Manager and Trust-Manager to establish secure connections between the applications. Implement monitoring and observability solutions to ensure the health and performance of the application and its underlying infrastructure. Troubleshoot and resolve issues related to Kubernetes infrastructure, including performance bottlenecks, resource constraints, and network connectivity. Implement security best practices for Kubernetes environments, including RBAC, network policies, secrets management, and define strategy to integrate security with various virtualization environment service providers like VMware or cloud hyperscalers. Stay updated with the latest Kubernetes features, tools, and technologies, and evaluate their applicability to improve our infrastructure and workflows. Mentor and train team members on Kubernetes concepts, best practices, and tools. Contribute to the development and maintenance of internal documentation, runbooks, and knowledge base articles related to Kubernetes. Requirements: Bachelor's degree in Computer Science, Engineering, or related field. Master's degree preferred. 2+ years of experience in designing, deploying, and managing Kubernetes clusters in production environments. Solid experience with infrastructure-as-code tools such as Crossplane. Proficiency in Kubernetes and container orchestration. Knowledge of Apache NiFi 2.0, including clustering and data flow management. Familiarity with GitOps practices and tools like ArgoCD. Experience with container monitoring and logging tools such as Prometheus and Grafana. Solid understanding of networking principles, including DNS, load balancing, and security in Kubernetes environments. Experience with identity and access management tools like Keycloak. Proficiency in secrets management using tools like Azure Vault. Experience with API gateway management using Kong. Knowledge of persistent storage solutions for Kubernetes. Experience with certificate management using Cert-Manager and Trust-Manager. 
Preferred Qualifications: Kubernetes certification (e.g., Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), or Certified Kubernetes Security Specialist (CKS)). Familiarity with CI/CD pipelines and tools such as GitHub. Knowledge of software-defined networking (SDN) solutions for Kubernetes. Contributions to open-source projects related to Kubernetes or containerization technologies.
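To make the cluster-management side of this role concrete, here is a small sketch using the official Kubernetes Python client to check the health of the pods backing a NiFi deployment. The namespace and label selector are assumptions for illustration.

```python
# Sketch: verify that pods backing a NiFi deployment are ready, using the
# official Kubernetes Python client; namespace and labels are assumptions.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(
    namespace="dataflow",              # assumed namespace
    label_selector="app=nifi",         # assumed label
)

for pod in pods.items:
    ready = all(cs.ready for cs in (pod.status.container_statuses or []))
    print(f"{pod.metadata.name}: phase={pod.status.phase} ready={ready}")
```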

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As an Apache NiFi Developer, you should have a post-graduate degree in Computer Science or a related field with a minimum of 5 years of experience in implementing end-to-end integration solutions using NiFi processors. It is essential to have at least 5 years of experience in Java and Spring Boot with microservices, along with 3 years of experience in application security aspects like SSL certificates and cryptography. Additionally, a minimum of 2 years of experience in distributed architecture is required. Your technical skills should include expertise in designing and developing NiFi and MiNiFi flows using various processors, including failover scenarios. You must also excel in SSL certificates, communication protocols such as SFTP and Site-to-Site, and cryptography. Proficiency in distributed architecture using ZooKeeper, Java, and microservices is crucial. Familiarity with distributed services resiliency and monitoring in a production environment is also expected. In terms of functional skills, you should have experience following best coding, security, unit testing, and documentation standards and practices. Previous experience in the Banking/Financial domain is highly desired, along with knowledge of Agile methodology. You will be responsible for ensuring the quality of technical and application architecture and design of systems across the organization. Conducting effective research and benchmarking technology against other best-in-class technologies is an integral part of the role. When it comes to soft skills, excellent communication, a positive attitude towards work, and eagerness to learn new things are essential. You should be a self-motivator and self-starter, capable of owning and driving tasks without supervision while collaborating effectively with teams. Strong interpersonal skills are necessary to interact with and present ideas to senior management in both IT and Business. Furthermore, the ability to train and mentor team members is crucial for success in this role. This role falls under the categories of Middleware Developer, Embedded Developer, Web Developer, Software Engineer, and Linux Engineer. The must-have skills for this position include proficiency in Java (all versions) for at least 5 years, Apache NiFi for 5 years, Spring Boot for 4 years, Microservices for 4 years, Cryptography for 3 years, and SSL for 3 years.

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

About your role: The Expert Engineer is a seasoned technology expert who is highly skilled in programming, engineering, and problem-solving. They can deliver value to the business faster and with superlative quality, and their code and designs meet business, technical, non-functional, and operational requirements most of the time without defects and incidents. So, if a relentless focus and drive towards technical and engineering excellence, along with adding value to the business, excites you, this is absolutely a role for you. If technical discussions and whiteboarding with peers excite you, and pair programming and code reviews add fuel to your tank, we are looking for you. You will understand system requirements, then analyse, design, develop, and test application systems following the defined standards. The candidate is expected to display professional ethics in their approach to work and exhibit a high level of ownership within a demanding working environment.

About you - Essential Skills: You have excellent software designing, programming, engineering, and problem-solving skills. Strong experience working on data ingestion, transformation, and distribution using AWS or Snowflake. Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools like NiFi, Matillion, and dbt. Hands-on working knowledge of EC2, Lambda, ECS/EKS, DynamoDB, and VPCs. Familiar with building data pipelines that leverage the full power and best practices of Snowflake, as well as how to integrate common technologies that work with Snowflake (code CI/CD, monitoring, orchestration, data quality). Experience with designing, implementing, and overseeing the integration of data systems and ETL processes through SnapLogic. Designing data ingestion and orchestration pipelines using AWS and Control-M. Establish strategies for data extraction, ingestion, transformation, automation, and consumption. Experience in data lake concepts with structured, semi-structured, and unstructured data. Experience in creating CI/CD processes for Snowflake. Experience in strategies for data testing, data quality, code quality, and code coverage. Ability, willingness, and openness to experiment with, evaluate, and adopt new technologies. Passion for technology, problem solving, and teamwork. Go-getter, with the ability to navigate across roles, functions, and business units to collaborate and drive agreements and changes from drawing board to live systems. Lifelong learner who can bring contemporary practices, technologies, and ways of working to the organization. Effective collaborator adept at using all effective modes of communication and collaboration tools. Experience delivering on data-related non-functional requirements, such as: hands-on experience dealing with large volumes of historical data across markets/geographies; manipulating, processing, and extracting value from large, disconnected datasets; building water-tight data quality gates on investment management data; generic handling of standard business scenarios in case of missing data, holidays, out-of-tolerance errors, etc.

Experience and Qualification: B.E./B.Tech or M.C.A. in Computer Science from a reputed university. Total 7 to 10 years of relevant experience.

Personal Characteristics: Good interpersonal and communication skills. Strong team player. Ability to work at a strategic and tactical level. Ability to convey strong messages in a polite but firm manner. Self-motivation is essential; should demonstrate commitment to high-quality design and development. Ability to develop and maintain working relationships with several stakeholders. Flexibility and an open attitude to change. Problem-solving skills with the ability to think laterally, and to think with a medium-term and long-term perspective. Ability to learn and quickly get familiar with a complex business and technology environment.
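As a hedged sketch of the Snowflake loading work this listing references (SnowSQL, Snowpipe, CI'd pipelines), the snippet below issues a COPY INTO from a stage using snowflake-connector-python. The account, warehouse, stage, and table names are placeholders.

```python
# Sketch of loading staged files into Snowflake with snowflake-connector-python;
# account, credentials, stage, and table names are placeholders only.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # assumed account identifier
    user="ETL_SVC",
    password="...",                 # prefer key-pair auth / a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # COPY INTO pulls files already landed on an internal/external stage.
    cur.execute("""
        COPY INTO RAW.TRADES
        FROM @RAW.TRADES_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())   # per-file load results
finally:
    conn.close()
```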

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

We are looking for a skilled ETL Developer with proficiency in Apache NiFi, SQL, and Python to become a valuable member of our team. Your main responsibilities will include the design, implementation, and administration of ETL processes, utilizing your expertise in NiFi, SQL, and Python. It would be advantageous to have experience with Redis, Elasticsearch, Qlik, and RPA tools for this role. This opportunity was presented by Joe Correya from Bydek.

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by . With a presence across 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry. These projects will transform the financial services industry. MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage. Job Title: Big Data Engineer - Scala. Preferred Skills: Strong skills in messaging technologies like Apache Kafka or equivalent. Programming skills in Scala and Spark with optimization techniques, plus Python. Should be able to write queries through Jupyter Notebook. Orchestration tools like NiFi and Airflow. Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics. Experience with SQL and distributed systems. Strong understanding of cloud architecture. Ensure a high-quality code base by writing and reviewing performant, well-tested code. Demonstrated experience building complex products. Knowledge of Splunk or other alerting and monitoring solutions. Fluent in the use of Git and Jenkins. Broad understanding of software engineering concepts and methodologies is required.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

You will be responsible for preparing data, developing models, testing them, and deploying them. This includes designing machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Your role will involve ensuring that algorithms generate accurate user recommendations. Additionally, you will work on turning unstructured data into useful information by auto-tagging images and converting text to speech. Solving complex problems with multi-layered data sets and optimizing existing machine learning libraries and frameworks will be part of your daily tasks. Your responsibilities will also include developing machine learning algorithms to analyze large volumes of historical data for making predictions. You will run tests, perform statistical analysis, and interpret the results, documenting machine learning processes. As a Lead Engineer in ML and Data Engineering, you will oversee the technologies, tools, and techniques used within the team. Collaboration with the team on designing solutions based on business requirements is essential. You will ensure that development standards, policies, and procedures are adhered to and drive change to implement efficient and effective strategies. Working closely with peers in the business to fully understand the business process and requirements is crucial. Maintenance, debugging, and problem-solving will also be part of your job responsibilities. Ensuring that all software developed within your team meets the specified business requirements and showing flexibility in responding to the changing needs of the business are key aspects of the role. Your technical skills should include 4+ years of experience in Python, API development using Flask/Django, and proficiency in libraries such as Pandas, NumPy, Keras, SciPy, scikit-learn, PyTorch, TensorFlow, and Theano. Hands-on experience in machine learning (supervised and unsupervised) and familiarity with data analytics tools and libraries are required. Experience in cloud data pipelines and engineering (Azure/AWS) as well as familiarity with ETL pipelines, Databricks, Apache NiFi, Kafka, and Talend will be beneficial. The ability to work independently on projects, good written and verbal communication skills, and a Bachelor's Degree in Computer Science/Engineering/BCA/MCA are essential qualifications for this role. Desirable skills include 2+ years of experience in Java.
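For a compact illustration of the supervised-learning workflow named in the technical skills above (scikit-learn, train/test split, evaluation), here is a minimal sketch; the dataset is synthetic rather than real business data.

```python
# Minimal supervised-learning example with scikit-learn; synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Generate a synthetic classification dataset.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Train a baseline model and evaluate on the hold-out set.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"hold-out accuracy: {accuracy_score(y_test, preds):.3f}")
```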

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As an Associate Architect (IND) at Elevance Health, you will be responsible for designing and implementing scalable, high-performance ETL solutions for data ingestion, transformation, and loading. You will define and maintain data architecture standards, best practices, and governance policies while collaborating with data engineers, analysts, and business stakeholders to understand data requirements. Your role will involve optimizing existing ETL pipelines for performance, reliability, and scalability, ensuring data quality, consistency, and security across all data flows. In this position, you will lead the evaluation and selection of ETL tools and technologies, providing technical leadership and mentorship to junior data engineers. Additionally, you will be expected to document data flows, architecture diagrams, and technical specifications. It would be beneficial to have experience with Snowflake and Oracle. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Systems, or a related field, along with at least 8 years of experience in data engineering or ETL development. Strong expertise in ETL tools such as Informatica, Talend, Apache NiFi, SSIS, or similar is essential, as well as proficiency in SQL and experience with relational and NoSQL databases. Experience with cloud platforms like AWS, Azure, or Google Cloud, familiarity with data modeling, data warehousing, and big data technologies are also required. The ideal candidate will possess strong problem-solving and communication skills, along with good business communication skills. You should be committed, accountable, and able to communicate status to stakeholders in a timely manner. Collaboration and leadership skills are vital for this role, as you will be working with global teams. At Carelon, we promise a world of limitless opportunities to our associates, fostering an environment that promotes growth, well-being, purpose, and a sense of belonging. Our focus on learning and development, innovative culture, comprehensive rewards, and competitive benefits make Carelon an equal opportunity employer dedicated to delivering the best results for our customers. If you require reasonable accommodation during the application process, please request the Reasonable Accommodation Request Form. This is a full-time position based in Bangalore.,

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for fetching and transforming data from various systems, conducting in-depth analyses to identify gaps, opportunities, and insights, and providing recommendations that support strategic business decisions. Your key responsibilities will include data extraction and transformation, data analysis and insight generation, visualization and reporting, collaboration with cross-functional teams, and building strong working relationships with external stakeholders. You will report to the VP of Business Growth and work closely with clients. To excel in this role, you should have proficiency in SQL for data querying and Python for data manipulation and transformation. Experience with data engineering tools such as Spark and Kafka, as well as orchestration tools like Apache NiFi and Apache Airflow, will be essential for ETL processes and workflow automation. Expertise in data visualization tools such as Tableau and Power BI, along with strong analytical skills including statistical techniques, will be crucial. In addition to technical skills, you should possess soft skills such as flexibility, excellent communication skills, business acumen, and the ability to work independently as well as within a team. Your academic qualifications should include a Bachelor's or Master's degree in Applied Mathematics, Management Science, Data Science, Statistics, Econometrics, or Engineering. Extensive experience in Data Lake architecture, building data pipelines using AWS services, proficiency in Python and SQL, and experience in the banking domain will be advantageous. Overall, you should demonstrate high motivation, a good work ethic, maturity, personal initiative, and strong oral and written communication skills to succeed in this role.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We are looking for a skilled ETL Tester with hands-on experience in SQL and Python to join our Quality Engineering team. The ideal candidate will be responsible for validating data pipelines, ensuring data quality, and supporting the end-to-end ETL testing lifecycle in a fast-paced environment. You will design, develop, and execute test cases for ETL workflows and data pipelines. Perform data validation and reconciliation using advanced SQL queries. Use Python for automation of test scripts, data comparison, and validation tasks. Work closely with Data Engineers and Business Analysts to understand data transformations and business logic. Perform root cause analysis of data discrepancies and report defects in a timely manner. Validate data across source systems, staging, and target data stores (e.g., Data Lakes, Data Warehouses). Participate in Agile ceremonies, including sprint planning and daily stand-ups. Maintain test documentation including test plans, test cases, and test results. Required qualifications to be successful in this role: 5+ years of experience in ETL/Data Warehouse testing. Strong proficiency in SQL (joins, aggregations, window functions, etc.). Experience in Python scripting for test automation and data validation. Hands-on experience with tools like Informatica, Talend, Apache NiFi, or similar ETL tools. Understanding of data models, data marts, and star/snowflake schemas. Familiarity with test management and bug tracking tools (e.g., JIRA, HP ALM). Strong analytical, debugging, and problem-solving skills. Good to have: exposure to Big Data technologies (e.g., Hadoop, Hive, Spark); experience with cloud platforms (e.g., AWS, Azure, GCP) and related data services; knowledge of CI/CD tools and automated data testing frameworks; experience working in Agile/Scrum teams. Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
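As an illustrative example of the Python-based data comparison work described above, the snippet below reconciles a source extract against a target load with pandas. The frames, key column, and exact-match comparison are assumptions for the sketch.

```python
# Illustrative source-vs-target reconciliation of the kind an ETL tester
# might automate; the data frames and column names are invented.
import pandas as pd

source = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
target = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 31.0]})

def reconcile(src: pd.DataFrame, tgt: pd.DataFrame, key: str) -> pd.DataFrame:
    """Return rows that are missing on either side or whose values differ."""
    merged = src.merge(tgt, on=key, how="outer",
                       suffixes=("_src", "_tgt"), indicator=True)
    return merged[
        (merged["_merge"] != "both")
        | (merged["amount_src"] != merged["amount_tgt"])
    ]

diffs = reconcile(source, target, key="order_id")
if diffs.empty:
    print("source and target reconcile")
else:
    print("mismatched rows:\n", diffs)
```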

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 10 Lacs

Hyderabad, Gurugram, Ahmedabad

Work from Office

About the Role: Grade Level (for internal use): 10. The Team: Do you love to collaborate and provide solutions? This team comes together across eight different locations every single day to craft enterprise-grade applications that serve a large customer base with growing demand and usage. You will use a wide range of technologies and cultivate a collaborative environment with other internal teams. The Impact: We focus primarily on developing, enhancing, and delivering required pieces of information and functionality to internal and external clients in all client-facing applications. You will have a highly visible role where even small changes have very wide impact. What's in it for you? Opportunities for innovation and learning new state-of-the-art technologies, and working in a pure agile and scrum methodology. Responsibilities: Design and implement software-related projects. Perform analyses and articulate solutions. Design underlying engineering for use in multiple product offerings supporting a large volume of end-users. Develop project plans with task breakdowns and estimates. Manage and improve existing solutions. Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits. What we're looking for - Basic Qualifications: Bachelor's degree in Computer Science or equivalent. 7+ years of related experience. Passionate, smart, and articulate developer. Strong C#, .NET, and SQL skills. Experience implementing web services (with WCF, RESTful JSON, SOAP, TCP), Windows services, and unit tests; dependency injection. Able to demonstrate strong OOP skills. Able to work well individually and with a team. Strong problem-solving skills. Good work ethic, self-starter, and results-oriented. Agile/Scrum experience a plus. Exposure to Data Engineering and Big Data technologies like Hadoop, big data processing engines/Scala, NiFi, and ETL is a plus. Experience with container platforms is a plus. Experience working in cloud computing environments like AWS, Azure, GCP, etc. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent.
By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity . ----------------------------------------------------------- S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. ----------------------------------------------------------- , SWP Priority Ratings - (Strategic Workforce Planning)

Posted 1 week ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, We are looking for a Big Data Developer to build and maintain scalable data processing systems. The ideal candidate will have experience handling large datasets and working with distributed computing frameworks. Key Responsibilities: Design and develop data pipelines using Hadoop, Spark, or Flink. Optimize big data applications for performance and reliability. Integrate various structured and unstructured data sources. Work with data scientists and analysts to prepare datasets. Ensure data quality, security, and lineage across platforms. Required Skills & Qualifications: Experience with Hadoop ecosystem (HDFS, Hive, Pig) and Apache Spark. Proficiency in Java, Scala, or Python. Familiarity with data ingestion tools (Kafka, Sqoop, NiFi). Strong understanding of distributed computing principles. Knowledge of cloud-based big data services (e.g., EMR, Dataproc, HDInsight). Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Chennai

Work from Office

Role Description: Provides leadership for the overall architecture, design, development, and deployment of a full-stack cloud-native data analytics platform. Designing and augmenting solution architecture for data ingestion, data preparation, data transformation, data load, ML and simulation modelling, Java BE & FE, state machine, API management, and intelligence consumption using data products on the cloud. Understand business requirements and help develop high-level and low-level data engineering and data processing documentation for the cloud-native architecture. Develop conceptual, logical, and physical target-state architecture, engineering, and operational specs. Work with the customer, users, technical architects, and application designers to define the solution requirements and structure for the platform. Model and design the application data structure, storage, and integration. Lead the database analysis, design, and build effort. Work with the application architects and designers to design the integration solution. Ensure that the database designs fulfill the requirements, including data volume, frequency needs, and long-term data growth. Able to perform data engineering tasks using Spark. Knowledge of developing efficient frameworks for development and testing using Sqoop/NiFi/Kafka/Spark Streaming/WebHDFS/Python to enable seamless data ingestion processes onto the Hadoop/BigQuery platforms. Enabling data governance and data discovery. Exposure to job monitoring frameworks along with validation automation. Exposure to handling structured, unstructured, and streaming data.

Technical Skills: Experience with building data platforms on cloud (Data Lake, Data Warehouse environment, Databricks). Strong technical understanding of data modeling, design, and architecture principles and techniques across master data, transaction data, and derived/analytic data. Proven background of designing and implementing architectural solutions which solve strategic and tactical business needs. Deep knowledge of best practices through relevant experience across data-related disciplines and technologies, particularly for enterprise-wide data architectures, data management, data governance, and data warehousing. Highly competent with database design and data modeling. Strong Data Warehousing and Business Intelligence skills, including: handling ELT and scalability issues for enterprise-level data warehouses; creating ETLs/ELTs to handle data from various data sources and various formats. Strong hands-on experience in programming languages like Python and Scala with Spark and Beam. Solid hands-on and solution-architecting experience in cloud technologies AWS, Azure, and GCP (GCP preferred). Hands-on working experience of data processing at scale with event-driven systems and message queues (Kafka/Flink/Spark Streaming). Hands-on working experience with GCP services like BigQuery, Dataproc, Pub/Sub, Dataflow, Cloud Composer, API Gateway, Data Lake, Bigtable, Spark, Apache Beam, and feature engineering/data processing to be used for model development. Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.). Experience building data pipelines for structured/unstructured, real-time/batch, and events/synchronous/asynchronous data using MQ, Kafka, and stream processing. Hands-on working experience in analyzing source system data and data flows, working with structured and unstructured data. Must be very strong in writing Spark SQL queries. Strong organizational skills, with the ability to work autonomously as well as lead a team. Pleasant personality and strong communication and interpersonal skills.

Qualifications: A bachelor's degree in computer science, computer engineering, or a related discipline is required to work as a technical lead. Certification in GCP would be a big plus. Individuals in this field can further display their leadership skills by completing the Project Management Professional certification offered by the Project Management Institute.
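As a small illustration of the Beam-style processing mentioned above, here is an Apache Beam (Python SDK) sketch. It uses in-memory data and the local runner rather than Dataflow, and the event fields are invented.

```python
# Small Apache Beam (Python SDK) pipeline: key events by market and sum
# amounts; runs on the local runner with in-memory data for illustration.
import apache_beam as beam

events = [
    {"market": "IN", "amount": 120.0},
    {"market": "IN", "amount": 80.0},
    {"market": "SG", "amount": 45.5},
]

with beam.Pipeline() as p:
    (p
     | "CreateEvents" >> beam.Create(events)
     | "KeyByMarket"  >> beam.Map(lambda e: (e["market"], e["amount"]))
     | "SumPerMarket" >> beam.CombinePerKey(sum)
     | "Print"        >> beam.Map(print))
```

The same pipeline shape ports to Dataflow by swapping the runner and reading from Pub/Sub or BigQuery instead of an in-memory collection.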

Posted 1 week ago

Apply

4.0 - 6.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Role Description : As a Senior Software Engineer - ETL - Python at Incedo, you will be responsible for designing and developing ETL workflows to extract, transform, and load data from various sources to target systems. You will work with data analysts and architects to understand business requirements and translate them into technical solutions. You will be skilled in ETL tools such as Informatica or Talend and have experience in programming languages such as SQL or Python. You will be responsible for writing efficient and reliable code that is easy to maintain and troubleshoot. Roles & Responsibilities: Develop, maintain, and enhance software applications for Extract, Transform, and Load (ETL) processes Design and implement ETL solutions that are scalable, reliable, and maintainable Develop and maintain ETL code, scripts, and jobs, ensuring they are efficient, accurate, and meet business requirements Troubleshoot and debug ETL code, identifying and resolving issues in a timely manner Collaborate with cross-functional teams, including data analysts, business analysts, and project managers, to understand requirements and deliver solutions that meet business needs Design and implement data integration processes between various systems and data sources Optimize ETL processes to improve performance, scalability, and reliability Create and maintain technical documentation, including design documents, coding standards, and best practices. Technical Skills : Proficiency in programming languages such as Python for writing ETL scripts. Knowledge of data transformation techniques such as filtering, aggregation, and joining. Familiarity with ETL frameworks such as Apache NiFi, Talend, or Informatica. Understanding of data profiling, data quality, and data validation techniques. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility of the whole team. Qualifications 4-6 years of work experience in relevant field B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred
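To illustrate the transformation techniques listed above (filtering, aggregation, and joining) in Python, here is a toy ETL-style transform using pandas; the data frames and column names stand in for real extracts.

```python
# Toy ETL transform: filter, join, and aggregate with pandas; data is invented.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer_id": [10, 10, 20, 30],
    "amount": [250.0, 90.0, 400.0, 15.0],
})
customers = pd.DataFrame({
    "customer_id": [10, 20, 30],
    "region": ["South", "North", "South"],
})

# Filter: keep orders above a threshold.
big_orders = orders[orders["amount"] > 50]

# Join: attach customer attributes.
enriched = big_orders.merge(customers, on="customer_id", how="left")

# Aggregate: total amount per region, ready to load into a target table.
summary = enriched.groupby("region", as_index=False)["amount"].sum()
print(summary)
```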

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Chennai

Work from Office

As a Senior Software Engineer - ETL - Python at Incedo, you will be responsible for designing and developing ETL workflows to extract, transform, and load data from various sources to target systems. You will work with data analysts and architects to understand business requirements and translate them into technical solutions. You will be skilled in ETL tools such as Informatica or Talend and have experience in programming languages such as SQL or Python. You will be responsible for writing efficient and reliable code that is easy to maintain and troubleshoot. Roles & Responsibilities: Develop, maintain, and enhance software applications for Extract, Transform, and Load (ETL) processes Design and implement ETL solutions that are scalable, reliable, and maintainable Develop and maintain ETL code, scripts, and jobs, ensuring they are efficient, accurate, and meet business requirements Troubleshoot and debug ETL code, identifying and resolving issues in a timely manner Collaborate with cross-functional teams, including data analysts, business analysts, and project managers, to understand requirements and deliver solutions that meet business needs Design and implement data integration processes between various systems and data sources Optimize ETL processes to improve performance, scalability, and reliability Create and maintain technical documentation, including design documents, coding standards, and best practices. Technical Skills Skills Requirements: Proficiency in programming languages such as Python for writing ETL scripts. Knowledge of data transformation techniques such as filtering, aggregation, and joining. Familiarity with ETL frameworks such as Apache NiFi, Talend, or Informatica. Understanding of data profiling, data quality, and data validation techniques. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility of the whole team. Qualifications 4-6 years of work experience in relevant field B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

Role Description As a Senior Software Engineer - ETL - Python at Incedo, you will be responsible for designing and developing ETL workflows to extract, transform, and load data from various sources to target systems. You will work with data analysts and architects to understand business requirements and translate them into technical solutions. You will be skilled in ETL tools such as Informatica or Talend and have experience in programming languages such as SQL or Python. You will be responsible for writing efficient and reliable code that is easy to maintain and troubleshoot. Roles & Responsibilities: Develop, maintain, and enhance software applications for Extract, Transform, and Load (ETL) processes Design and implement ETL solutions that are scalable, reliable, and maintainable Develop and maintain ETL code, scripts, and jobs, ensuring they are efficient, accurate, and meet business requirements Troubleshoot and debug ETL code, identifying and resolving issues in a timely manner Collaborate with cross-functional teams, including data analysts, business analysts, and project managers, to understand requirements and deliver solutions that meet business needs Design and implement data integration processes between various systems and data sources Optimize ETL processes to improve performance, scalability, and reliability Create and maintain technical documentation, including design documents, coding standards, and best practices. Technical Skills Skills Requirements: Proficiency in programming languages such as Python for writing ETL scripts. Knowledge of data transformation techniques such as filtering, aggregation, and joining. Familiarity with ETL frameworks such as Apache NiFi, Talend, or Informatica. Understanding of data profiling, data quality, and data validation techniques. Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility of the whole team. Nice-to-have skills Qualifications 4-6 years of work experience in relevant field B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Hyderabad

Work from Office

**Key Responsibilities:**
- Design conformed star & snowflake schemas; implement SCD2 dimensions and fact tables (see the sketch below).
- Lead Spark (PySpark/Scala) or AWS Glue ELT pipelines from RDS Zero-ETL/S3 into Redshift.
- Tune RA3 clusters (sort/dist keys, WLM queues, Spectrum partitions) for sub-second BI queries.
- Establish data-quality, lineage, and cost-governance dashboards using CloudWatch & Terraform/CDK.
- Collaborate with Product & Analytics to translate HR KPIs into self-service data marts.
- Mentor junior engineers; drive documentation and coding standards.

**Must-Have Skills:**
- Amazon Redshift (sort & dist keys, RA3, Spectrum)
- Spark on EMR/Glue (PySpark or Scala)
- Dimensional modelling (Kimball), star schema, SCD2
- Advanced SQL plus Python/Scala scripting
- AWS IAM, KMS, CloudWatch, Terraform/CDK, CI/CD (GitHub Actions or CodePipeline)

**Nice-to-Have:**
- dbt, Airflow, Kinesis/Kafka, Lake Formation row-level ACLs
- GDPR / SOC 2 compliance exposure
- AWS Data Analytics or Solutions Architect certification

**Education:** B.E./B.Tech in Computer Science, IT, or a related field (Master’s preferred but not mandatory).

**Compensation & Benefits:**
- Competitive CTC: 25–40 LPA
- Health insurance for self & dependents

**Why Join Us?**
- Own a greenfield HR analytics platform with executive sponsorship.
- Modern AWS stack (Redshift RA3, Lake Formation, EMR on EKS).
- Culture of autonomy, fast decision-making, and continuous learning.

**Application Process:**
- 30-minute technical screen
- 4-hour take-home Spark/SQL challenge
- 90-minute architecture deep dive
- Panel interview (leadership & stakeholder communication)
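As background on the SCD2 work this role calls for, the following is a minimal PySpark sketch that detects changed dimension rows and appends new current versions. The S3 paths, the employee_id business key, and the department attribute are hypothetical; a production pipeline would also close out the superseded rows (for example via a Redshift MERGE or an UPDATE on effective_to/is_current) rather than only appending.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

# Hypothetical inputs: the existing SCD2 dimension and the latest source extract
dim = spark.read.parquet("s3://example-bucket/dim_employee/")
src = spark.read.parquet("s3://example-bucket/staging/employee/")

current = dim.filter(F.col("is_current") == True)

# Detect changed attributes by joining on the business key
changed = (
    src.alias("s")
       .join(current.alias("d"), on="employee_id", how="inner")
       .filter(F.col("s.department") != F.col("d.department"))
       .select("s.*")
)

# Open new versions effective today; closing the old versions is handled separately
new_versions = (
    changed.withColumn("effective_from", F.current_date())
           .withColumn("effective_to", F.lit(None).cast("date"))
           .withColumn("is_current", F.lit(True))
)

new_versions.write.mode("append").parquet("s3://example-bucket/dim_employee/")
```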

Posted 1 week ago

Apply

6.0 - 10.0 years

20 - 30 Lacs

Egypt, Chennai, Bengaluru

Hybrid

**We're Hiring: MLOps Engineer | Cairo, Egypt | Immediate Joiners Only**

Share CVs to vijay.s@xebia.com

- Location: Cairo, Egypt
- Experience: 6-8 years
- Mode: Onsite
- Joining: Immediate or max 2 weeks' notice
- Relocation: Open to relocating to Egypt ASAP

**Job Summary:** Xebia is seeking a seasoned MLOps Engineer to scale and operationalize ML solutions for our strategic client in Cairo. This is an onsite role, perfect for professionals who are ready to deploy cutting-edge ML pipelines in real-world enterprise environments.

**Key Responsibilities:**
- Design & manage end-to-end scalable, reliable ML pipelines
- Build CI/CD pipelines with Azure DevOps
- Deploy and track ML models using MLflow (see the sketch below)
- Work on large-scale data with Cloudera/Hadoop (Hive, Spark, HDFS)
- Support Knowledge Graphs, metadata enrichment, and model lineage
- Collaborate with DS & engineering teams to ensure governance and auditability
- Implement model performance monitoring, drift detection, and data quality checks
- Support DevOps automation aligned with enterprise-grade compliance standards

**Required Skills:**
- 6-8 years in MLOps / Machine Learning Engineering
- Hands-on with MLflow, Azure DevOps, Python
- Deep experience with Cloudera, Hadoop, Spark, Hive
- Exposure to Knowledge Graphs and containerization (Docker/Kubernetes)
- Familiar with TensorFlow, scikit-learn, or PyTorch
- Understanding of data security, access controls, and audit logging

**Preferred:**
- Azure certifications (e.g., Azure Data Engineer / AI Engineer Associate)
- Experience with Apache NiFi, Airflow, or similar tools
- Background in regulated sectors like BFSI, Healthcare, or Pharma

**Soft Skills:**
- Strong problem-solving & analytical thinking
- Clear communication & stakeholder engagement
- Passion for automation & continuous improvement

**Additional Information:** Only apply if:
- You can join within 2 weeks or are an immediate joiner
- You're open to relocating to Cairo, Egypt ASAP
- You hold a valid passport (visa-on-arrival/B1/Schengen holders from the MEA region preferred)

**To Apply:** Send your updated CV to vijay.s@xebia.com along with: Full Name, Total Experience, Current CTC, Expected CTC, Current Location, Preferred Xebia Location (Cairo), Notice Period / Last Working Day (if serving), Primary Skills, LinkedIn Profile, Valid Passport No.

Be part of a global transformation journey and make AI work at scale!

#MLOps #Hiring #AzureDevOps #MLflow #CairoJobs #ImmediateJoiners #DataEngineering #Cloudera #Hadoop #XebiaCareers
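To illustrate the MLflow model-tracking work mentioned above, here is a minimal sketch of logging parameters, a metric, and a trained model to an MLflow tracking server. The experiment name, synthetic dataset, and hyperparameters are assumptions for illustration only; the client's actual pipelines would differ.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("churn-model-demo")  # hypothetical experiment name

# Synthetic data stands in for the real feature store
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log parameters, an evaluation metric, and the serialized model for later deployment
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```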

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Gurugram

Work from Office

**Job Summary:** We are seeking a highly experienced and motivated Snowflake Data Architect & ETL Specialist to join our growing Data & Analytics team. The ideal candidate will be responsible for designing scalable Snowflake-based data architectures, developing robust ETL/ELT pipelines, and ensuring data quality, performance, and security across multiple data environments. You will work closely with business stakeholders, data engineers, and analysts to drive actionable insights and ensure data-driven decision-making.

**Key Responsibilities:**
- Design, develop, and implement scalable Snowflake-based data architectures.
- Build and maintain ETL/ELT pipelines using tools such as Informatica, Talend, Apache NiFi, Matillion, or custom Python/SQL scripts (see the sketch below).
- Optimize Snowflake performance through clustering, partitioning, and caching strategies.
- Collaborate with cross-functional teams to gather data requirements and deliver business-ready solutions.
- Ensure data quality, governance, integrity, and security across all platforms.
- Migrate legacy data warehouses (e.g., Teradata, Oracle, SQL Server) to Snowflake.
- Automate data workflows and support CI/CD deployment practices.
- Implement data modeling techniques including dimensional modeling, star/snowflake schemas, and normalization/denormalization.
- Support and promote metadata management and data governance best practices.

**Technical Skills (Hard Skills):**
- Expertise in Snowflake: architecture design, performance tuning, cost optimization.
- Strong proficiency in SQL, Python, and scripting for data engineering tasks.
- Hands-on experience with ETL tools: Informatica, Talend, Apache NiFi, Matillion, or similar.
- Proficient in data modeling (dimensional, relational, star/snowflake schema).
- Good knowledge of cloud platforms: AWS, Azure, or GCP.
- Familiar with orchestration and workflow tools such as Apache Airflow, dbt, or DataOps frameworks.
- Experience with CI/CD tools and version control systems (e.g., Git).
- Knowledge of BI tools such as Tableau, Power BI, or Looker.

**Certifications (Preferred/Required):**
- Snowflake SnowPro Core Certification – required or highly preferred
- SnowPro Advanced Architect Certification – preferred
- Cloud certifications (e.g., AWS Certified Data Analytics – Specialty, Azure Data Engineer Associate) – preferred
- ETL tool certifications (e.g., Talend, Matillion) – optional but a plus

**Soft Skills:**
- Strong analytical and problem-solving capabilities.
- Excellent communication and collaboration skills.
- Ability to translate technical concepts into business-friendly language.
- Proactive, detail-oriented, and highly organized.
- Capable of multitasking in a fast-paced, dynamic environment.
- Passionate about continuous learning and adopting new technologies.

**Why Join Us?**
- Work on cutting-edge data platforms and cloud technologies.
- Collaborate with industry leaders in analytics and digital transformation.
- Be part of a data-first organization focused on innovation and impact.
- Enjoy a flexible, inclusive, and collaborative work culture.
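For context on the custom Python/SQL ELT scripting this listing mentions, the sketch below uses the Snowflake Python connector to bulk-load staged files and apply a clustering key. The account details, warehouse, table, stage, and clustering column are all hypothetical placeholders; in practice credentials would come from a secrets manager and the load would typically run inside an orchestrator such as Airflow.

```python
import os
import snowflake.connector

# Illustrative connection; credentials are read from environment variables here
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()

    # Bulk-load staged CSV files into a target table (COPY INTO is Snowflake's bulk-load path)
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)

    # A clustering key can help prune micro-partitions on large, frequently filtered tables
    cur.execute("ALTER TABLE STAGING.ORDERS CLUSTER BY (ORDER_DATE)")
finally:
    conn.close()
```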

Posted 1 week ago

Apply

4.0 - 9.0 years

3 - 4 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

**Integration Consultant - o9 Platform**

Locations: Bangalore | Pune | Hyderabad | Mumbai | PAN India
Experience: 4 to 15 years

**Key Responsibilities:**
- Act as the Integration Consultant for o9 implementation projects.
- Understand and design using the o9 platform's data models, pipelines, and structures.
- Analyze customer data for quality, completeness & technical alignment.
- Collaborate on technical design and data gathering, and suggest optimizations.
- Configure batch schedules for regular integrations.
- Implement complete end-to-end integration from external systems to the o9 platform (see the sketch below).

**Technical Skills Required:**
- Strong in SQL, PySpark, Python, Spark SQL, and ETL tools.
- Experience with SQL Server / Oracle (DDL, DML, stored procedures).
- Must have delivered one end-to-end o9 integration project.

**Nice to Have:**
- Knowledge of Airflow, Delta Lake, NiFi, Kafka.
- Experience with API-based integrations.

**Professional Attributes:**
- Strong communication, problem-solving, and analytical abilities.
- Ability to work independently & collaboratively.
- Positive attitude and proactive approach.

**Educational Qualifications:** BE/BTech/MCA or Bachelor's/Master's degree in Computer Science or relevant fields.

Take the next big step in your tech career!
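As a rough illustration of the batch extract step in such an integration, the PySpark sketch below reads a source table over JDBC, applies a basic completeness check, and writes a landing file for the downstream platform load. The JDBC URL, credentials, table, and columns are hypothetical, the appropriate JDBC driver is assumed to be on the Spark classpath, and the actual o9-side load mechanism is intentionally out of scope here.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("o9_batch_extract_sketch").getOrCreate()

# Hypothetical JDBC source; URL, credentials, and table names would come from the
# customer's landscape and a secrets store, and the SQL Server JDBC driver must be available
sales = (
    spark.read.format("jdbc")
        .option("url", "jdbc:sqlserver://example-host:1433;databaseName=ERP")
        .option("dbtable", "dbo.SalesOrders")
        .option("user", "etl_user")
        .option("password", "***")
        .load()
)

# Basic quality/completeness check before handing data to the downstream platform
clean = sales.filter(F.col("order_qty").isNotNull() & (F.col("order_qty") > 0))

# Shape the extract to the target interface and write it for the scheduled batch load
clean.select("order_id", "item_id", "location_id", "order_qty", "order_date") \
     .write.mode("overwrite").parquet("/landing/o9/sales_orders/")
```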

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies