5.0 - 10.0 years
4 - 8 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Site Reliability Engineer
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, Data Mesh, and security best practices is a plus
• Strong problem-solving and debugging mindset
• Ability to work under pressure in a fast-paced environment
• Excellent communication and collaboration skills
• Ownership, customer orientation, and a bias for action
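For the "automate routine tasks and implement self-healing systems" responsibility above, here is a minimal, hedged sketch of a health-check-and-restart loop; the health URL, systemd unit name, retry count, and wait time are hypothetical placeholders, not values from the posting:

```python
# Illustrative self-healing check: service name and health endpoint are hypothetical.
import subprocess
import time

import requests

HEALTH_URL = "http://localhost:8080/health"   # hypothetical health endpoint
SERVICE = "nifi"                               # hypothetical systemd unit name
MAX_RETRIES = 3

def healthy() -> bool:
    """Return True if the health endpoint answers 200 within 5 seconds."""
    try:
        return requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

def main() -> None:
    for attempt in range(1, MAX_RETRIES + 1):
        if healthy():
            print("service healthy")
            return
        print(f"health check failed (attempt {attempt}); restarting {SERVICE}")
        # Restart the unit, then wait before re-checking.
        subprocess.run(["sudo", "systemctl", "restart", SERVICE], check=False)
        time.sleep(30)
    print("service still unhealthy; escalate to L3 / raise an incident")

if __name__ == "__main__":
    main()
```

In practice such a script would typically be scheduled (cron, Airflow) and would raise an alert instead of printing, but the shape of the check-restart-escalate loop is the same.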
Posted 2 months ago
5.0 - 7.0 years
15 - 25 Lacs
Chennai
Work from Office
Job Summary: We are seeking a skilled Big Data Tester & Developer to design, develop, and validate data pipelines and applications on large-scale data platforms. You will work on data ingestion, transformation, and testing workflows using tools from the Hadoop ecosystem and modern data engineering stacks. Experience: 6-12 years
Key Responsibilities:
• Develop and test Big Data pipelines using Spark, Hive, Hadoop, and Kafka
• Write and optimize PySpark/Scala code for data processing
• Design test cases for data validation, quality, and integrity
• Automate testing using Python/Java and tools like Apache NiFi, Airflow, or DBT
• Collaborate with data engineers, analysts, and QA teams
Key Skills:
• Strong hands-on experience in Big Data tools: Spark, Hive, HDFS, Kafka
• Proficient in PySpark, Scala, or Java
• Experience in data testing, ETL validation, and data quality checks
• Familiarity with SQL, NoSQL, and data lakes
• Knowledge of CI/CD, Git, and automation frameworks
We are looking for a skilled PostgreSQL Developer/DBA to design, implement, optimize, and maintain our PostgreSQL database systems. You will work closely with developers and data teams to ensure high performance, scalability, and data integrity. Experience: 6 to 12 years
Key Responsibilities:
• Develop complex SQL queries, stored procedures, and functions
• Optimize query performance and database indexing
• Manage backups, replication, and security
• Monitor and tune database performance
• Support schema design and data migrations
Key Skills:
• Strong hands-on experience with PostgreSQL
• Proficient in SQL and PL/pgSQL scripting
• Experience in performance tuning, query optimization, and indexing
• Familiarity with logical replication, partitioning, and extensions
• Exposure to tools like pgAdmin, psql, or PgBouncer
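To illustrate the data-validation test cases mentioned in this role, a small PySpark sketch is shown below; the Hive table names and the business key are hypothetical examples, not part of the posting:

```python
# Minimal PySpark data-quality checks; table names and rules are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").enableHiveSupport().getOrCreate()

df = spark.table("staging.transactions")  # hypothetical target table

# Row-count parity against the raw extract.
source_count = spark.table("raw.transactions").count()
target_count = df.count()
assert source_count == target_count, f"count mismatch: {source_count} vs {target_count}"

# Null and duplicate checks on the business key.
null_keys = df.filter(F.col("txn_id").isNull()).count()
dup_keys = df.groupBy("txn_id").count().filter(F.col("count") > 1).count()
assert null_keys == 0, f"{null_keys} null keys found"
assert dup_keys == 0, f"{dup_keys} duplicate keys found"

print("data-quality checks passed")
```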
Posted 2 months ago
2.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
We need a proficient Data Engineer with experience monitoring and fixing jobs for data pipelines written in Azure Data Factory and Python.
Design and implement data models for Snowflake to support analytical solutions. Develop ETL processes to integrate data from various sources into Snowflake. Optimize data storage and query performance in Snowflake. Collaborate with cross-functional teams to gather requirements and deliver scalable data solutions. Monitor and maintain Snowflake environments, ensuring optimal performance and data security. Create documentation for data architecture, processes, and best practices. Provide support and training for teams utilizing Snowflake services.
Roles and Responsibilities: Strong experience with Snowflake architecture and data warehousing concepts. Proficiency in SQL for data querying and manipulation. Familiarity with ETL tools such as Talend, Informatica, or Apache NiFi. Experience with data modeling techniques and tools. Knowledge of cloud platforms, specifically AWS, Azure, or Google Cloud. Understanding of data governance and compliance requirements. Excellent analytical and problem-solving skills. Strong communication and collaboration skills to work effectively within a team. Experience with Python or Java for data pipeline development is a plus.
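As a hedged illustration of the Snowflake ETL work described above, the sketch below loads staged files and merges them into a target table using the Snowflake Python connector; the account, stage, and table names are assumptions:

```python
# Sketch of a Snowflake load step; connection details and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load files already landed in an external stage into the staging table.
    cur.execute(
        "COPY INTO STAGING.ORDERS FROM @landing_stage/orders "
        "FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY='\"' SKIP_HEADER = 1)"
    )
    # Merge staged rows into the analytical model.
    cur.execute("""
        MERGE INTO ANALYTICS.DIM_ORDERS t
        USING STAGING.ORDERS s ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.status = s.status
        WHEN NOT MATCHED THEN INSERT (order_id, status) VALUES (s.order_id, s.status)
    """)
finally:
    conn.close()
```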
Posted 2 months ago
3.0 - 7.0 years
20 - 27 Lacs
Gurugram
Work from Office
The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision, and should be self-motivated with a passion for problem solving and continuous learning.
Role and responsibilities: Strong technical, analytical, and problem-solving skills. Strong organizational skills, with the ability to work autonomously as well as in a team-based environment. Data pipeline framework development.
Technical skills requirements: The candidate must demonstrate proficiency in CDH (on-premise) for data processing and extraction. Ability to own and deliver on large, multi-faceted projects. Fluency in complex SQL and experience with RDBMSs. Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive, and NoSQL DBs. Experience designing and building big data pipelines. Experience working on large-scale, distributed systems. Experience with Databricks would be an added advantage. Strong hands-on programming experience with PySpark, Scala (with Spark), and Python. Exposure to various ETL and Business Intelligence tools. Experience in shell scripting to automate pipeline execution. Solid grounding in Agile methodologies. Experience with git and other source control systems. Strong communication and presentation skills.
Nice-to-have skills: Certification in Hadoop/Big Data (Hortonworks/Cloudera). Databricks Spark certification. Unix or shell scripting. Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations. Experience of managing client delivery teams, ideally coming from a Data Engineering / Data Science environment.
Qualifications: B.Tech./M.Tech./MS or BCA/MCA degree from a reputed university
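For the "data pipeline framework development" point above, a minimal sketch of a config-driven PySpark ingestion loop on CDH is shown below; the table names and the config layout are hypothetical:

```python
# Sketch of a metadata-driven PySpark ingestion step on CDH; the config structure
# and table names are illustrative assumptions.
from pyspark.sql import SparkSession

PIPELINE_CONFIG = [
    {"source": "raw_db.cdr_events", "target": "curated_db.cdr_events", "partition": "event_date"},
    {"source": "raw_db.subscribers", "target": "curated_db.subscribers", "partition": "load_date"},
]

spark = SparkSession.builder.appName("cdh-pipeline").enableHiveSupport().getOrCreate()

for job in PIPELINE_CONFIG:
    # Read the source Hive table, deduplicate, and rewrite partitioned Parquet.
    df = spark.table(job["source"]).dropDuplicates()
    (df.write
       .mode("overwrite")
       .partitionBy(job["partition"])
       .format("parquet")
       .saveAsTable(job["target"]))
    print(f"loaded {job['source']} -> {job['target']}")
```

Driving the pipeline from a configuration list (or an external config file) keeps the framework reusable: new feeds become new config entries rather than new code.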
Posted 2 months ago
15.0 - 20.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: O9 Solutions
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by delivering high-quality applications that enhance operational efficiency.
Roles & Responsibilities: Play the integration consultant role on o9 implementation projects. Understand the o9 platform's data model (table structures, linkages, pipelines, optimal designs) for designing various planning use cases. Review and analyze the data provided by the customer along with its technical/functional intent and inter-dependencies. Participate in the technical design and data requirements gathering, making recommendations in case of inaccurate or missing data. Work on designing and creating batch schedules based on frequency and configuration settings for daily/weekly/quarterly/yearly batches. End-to-end integration implementation from partner systems to the o9 platform.
Technical Skills: Must have a minimum of 3 to 7 years of experience with SQL, PySpark, Python, Spark SQL and ETL tools. Proficiency in databases (SQL Server, Oracle, etc.). Knowledge of DDL, DML, stored procedures. Good to have experience in Airflow, Delta Lake, NiFi, Kafka. At least one end-to-end integration implementation experience is preferred. Any API-based integration experience is an added advantage.
Professional Skills: Proven ability to work creatively and analytically in a problem-solving environment. Proven ability to build, manage and foster a team-oriented environment. Excellent problem-solving skills with excellent written/oral communication and interpersonal skills. Strong collaborator, team player, and individual contributor.
Educational Qualification: BE/BTech/MCA/Bachelor's degree/Master's degree in computer science and related fields of work are preferred.
Additional Information: The candidate should have a minimum of 7.5 years of experience in O9 Solutions. This position is based in Pune. 15 years of full-time education is required. Open to travel - short / long term.
Qualification: 15 years full time education
Posted 2 months ago
3.0 - 5.0 years
9 - 13 Lacs
Pune
Work from Office
Job Title: Big Data Tester
About Us: Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year at the British Bank Awards and have been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities globally, we support 100+ clients across the banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry.
MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage.
Role: Support, develop, and maintain automated test frameworks, tools, and test cases for Data Engineering and Data Warehouse applications (see the test sketch below). Collaborate with cross-functional teams, including software developers, data engineers, and data analysts, to ensure comprehensive testing coverage and adherence to quality standards. Conduct thorough testing of data pipelines, ETL processes, and data transformations using Big Data technologies. Apply your knowledge of Data Warehouse/Data Lake methodologies and best practices to validate the accuracy, completeness, and performance of our data storage and retrieval systems. Identify, document, and track software defects, working closely with the development team to ensure timely resolution. Participate in code reviews, design discussions, and quality assurance meetings to provide valuable insights and contribute to the overall improvement of our software products.
Base Skill Requirements (Must - Technical): Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3-5 years of experience in software testing and development, with a focus on data-intensive applications. Proven experience in testing data pipelines and ETL processes - test planning, test environment planning, end-to-end testing, performance testing. Solid programming skills in Python, with proven automation effort to bring efficiency to the test cycles. Solid understanding of data models and SQL. Must have experience with ETL (Extract, Transform, Load) processes and tools (scheduling and orchestration tools, ETL design understanding). Good understanding of Big Data technologies like Spark, Hive, and Impala. Understanding of Data Warehouse methodologies, applications, and processes. Experience working in an Agile/Scrum environment, with a solid understanding of user stories, acceptance criteria, and sprint cycles.
Optional (Technical): Experience with scripting languages like Bash or Shell. Experience working with large-scale datasets and distributed data processing frameworks (e.g., Hadoop, Spark). Familiarity with data integration tools like Apache NiFi is a plus. Excellent problem-solving and debugging skills, with a keen eye for detail.
Strong communication and collaboration skills to work effectively in a team-oriented environment. Eagerness to learn and contribute to a growing team.
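As a hedged example of the automated testing described in this role, the sketch below uses pytest with PySpark to reconcile source and warehouse tables; the table names and checks are illustrative assumptions, not the project's actual fixtures:

```python
# Illustrative ETL reconciliation tests; table names are hypothetical.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.appName("etl-tests").enableHiveSupport().getOrCreate()

@pytest.mark.parametrize("source, target", [
    ("raw.customers", "dw.dim_customer"),
    ("raw.orders", "dw.fact_orders"),
])
def test_row_counts_match(spark, source, target):
    """Source and warehouse tables should reconcile after the nightly load."""
    assert spark.table(source).count() == spark.table(target).count()

def test_no_future_dates(spark):
    """A simple integrity rule: no order can be dated in the future."""
    df = spark.table("dw.fact_orders")
    assert df.filter("order_date > current_date()").count() == 0
```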
Posted 2 months ago
2.0 - 5.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: O9 Solutions
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by delivering high-quality applications that enhance operational efficiency.
Roles & Responsibilities: Play the integration consultant role on o9 implementation projects. Understand the o9 platform's data model (table structures, linkages, pipelines, optimal designs) for designing various planning use cases. Review and analyze the data provided by the customer along with its technical/functional intent and inter-dependencies. Participate in the technical design and data requirements gathering, making recommendations in case of inaccurate or missing data. Work on designing and creating batch schedules based on frequency and configuration settings for daily/weekly/quarterly/yearly batches. End-to-end integration implementation from partner systems to the o9 platform.
Technical Skills: Must have a minimum of 3 to 7 years of experience with SQL, PySpark, Python, Spark SQL and ETL tools. Proficiency in databases (SQL Server, Oracle, etc.). Knowledge of DDL, DML, stored procedures. Good to have experience in Airflow, Delta Lake, NiFi, Kafka. At least one end-to-end integration implementation experience is preferred. Any API-based integration experience is an added advantage.
Professional Skills: Proven ability to work creatively and analytically in a problem-solving environment. Proven ability to build, manage and foster a team-oriented environment. Excellent problem-solving skills with excellent written/oral communication and interpersonal skills. Strong collaborator, team player, and individual contributor.
Educational Qualification: BE/BTech/MCA/Bachelor's degree/Master's degree in computer science and related fields of work are preferred.
Additional Information: The candidate should have a minimum of 7.5 years of experience in O9 Solutions. This position is based in Pune. 15 years of full-time education is required. Open to travel - short / long term.
Qualification: 15 years full time education
Posted 2 months ago
8.0 - 13.0 years
25 - 35 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
We are seeking a highly skilled ETL Architect - Powered by AI (Apache NiFi/Kafka) to join our team. The ideal candidate will have expertise in managing, automating, and orchestrating data flows using Apache NiFi. In this role, you will design, implement, and maintain scalable data pipelines that handle real-time and batch data processing. The role also involves integrating NiFi with various data sources, performing data transformation tasks, and ensuring data quality and governance.
Key Responsibilities:
Real-Time Data Integration (Apache NiFi & Kafka): Design, develop, and implement real-time data pipelines leveraging Apache NiFi for seamless data flow. Build and maintain Kafka producers and consumers for effective streaming data management across systems (see the sketch below). Ensure the scalability, reliability, and performance of data streaming platforms using NiFi and Kafka. Monitor, troubleshoot, and optimize data flow within Apache NiFi and Kafka clusters. Manage schema evolution and support data serialization formats such as Avro, JSON, and Protobuf. Set up, configure, and optimize Kafka topics, partitions, and brokers for high availability and fault tolerance. Implement backpressure handling, prioritization, and flow control strategies in NiFi data flows. Integrate NiFi flows with external services (e.g., REST APIs, HDFS, RDBMS) for efficient data movement. Establish and maintain secure data transmission, access controls, and encryption mechanisms in NiFi and Kafka environments. Develop and maintain batch ETL pipelines using tools like Informatica, Talend, and custom Python/SQL scripts. Continuously optimize and refactor existing ETL workflows to improve performance, scalability, and fault tolerance. Implement job scheduling, error handling, and detailed logging mechanisms for data pipelines. Conduct data quality assessments and design frameworks to ensure high-quality data integration. Design and document both high-level and low-level data architectures for real-time and batch processing. Lead technical evaluations of emerging tools and platforms for potential adoption into existing systems.
Qualifications we seek in you:
Minimum Qualifications / Skills: Bachelor's degree in Computer Science, Information Technology, or a related field. Significant experience in IT with a focus on data architecture and engineering. Proven experience in technical leadership, driving data integration projects and initiatives. Certifications in relevant technologies (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Data Engineer) are a plus. Strong analytical skills and the ability to translate business requirements into effective technical solutions. Proficiency in communicating complex technical concepts to non-technical stakeholders.
Preferred Qualifications / Skills: Extensive hands-on experience as a Data Architect. In-depth experience with Apache NiFi, Apache Kafka, and related ecosystem components (e.g., Kafka Streams, Schema Registry). Ability to develop and optimize NiFi processors to handle various data sources and formats. Proficient in creating reusable NiFi templates for common data flows and transformations. Familiarity with integrating NiFi and Kafka with big data technologies like Hadoop, Spark, and Databricks. At least two end-to-end implementations of data integration solutions in a real-world environment. Experience in metadata management frameworks and scalable data ingestion processes.
Solid understanding of data platform design patterns and best practices for integrating real-time data systems. Knowledge of ETL processes, data integration tools, and data modeling techniques. Demonstrated experience in Master Data Management (MDM) and data privacy standards. Experience with modern data platforms such as Snowflake, Databricks, and big data tools. Proven ability to troubleshoot complex data issues and implement effective solutions. Strong project management skills with the ability to lead data initiatives from concept to delivery. Familiarity with AI/ML frameworks and their integration with data platforms is a plus. Excellent communication and interpersonal skills, with the ability to collaborate effectively across cross-functional teams.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
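To illustrate the "build and maintain Kafka producers and consumers" responsibility above, here is a minimal round-trip sketch using the kafka-python client; the broker address and topic name are assumptions:

```python
# Minimal Kafka produce/consume round trip; broker and topic are hypothetical.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["localhost:9092"]
TOPIC = "ingest.events"  # hypothetical topic name

# Producer: JSON-serialise each record before sending.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"event_id": 1, "status": "NEW"})
producer.flush()

# Consumer: read from the beginning of the topic and stop after 5 seconds of idleness.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)
```

In a production pipeline the serialization would usually go through a schema registry (Avro/Protobuf, as the posting notes) rather than plain JSON, but the producer/consumer shape is the same.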
Posted 2 months ago
3.0 - 5.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Location: India, Bangalore | Time type: Full time | Posted: 12 days ago | Job requisition ID: JR0273871
Job Details:
About the Role: Join our innovative and inclusive Logic Technology Development team as a TD AI and Analytics Engineer, where diverse talents come together to push the boundaries of semiconductor technology. You will have the opportunity to work in one of the world's most advanced cleanroom facilities, designing, executing, and analyzing experiments to meet engineering specifications for our cutting-edge processes. This role offers a unique chance to learn and operate a manufacturing line, integrating the many individual steps necessary for the production of complex microprocessors.
What We Offer: We are dedicated to creating a collaborative, supportive, and exciting environment where diverse perspectives drive exceptional results. At Intel, you will have the opportunity to transform technology and contribute to a better future by delivering innovative products. Learn more about Intel Corporation's Core Values here.
Benefits: We offer a comprehensive benefits package designed to support a healthy and fulfilling life. This includes excellent medical plans, wellness programs, recreational activities, generous time off, discounts on various products and services, and many more creative rewards that make Intel a great place to work. Discover more about our amazing benefits here.
About the Logic Technology Development (LTD) TD Intel Foundry AI and Analytics Innovation Organization: Intel Foundry TD's AI and Analytics Innovation office is committed to providing a competitive advantage through end-to-end AI and analytics solutions, driving Intel's ambitious IDM 2.0 goals. Our team is seeking an engineer with a background in Data Engineering, Software Engineering, or Data Science to support and develop modern AI/ML solutions. Explore what life is like inside Intel here.
Key Responsibilities: As an Engineer in the TD AI office, you will collaborate with Intel's factory automation organization and Foundry TD's functional areas to support and develop modern AI/ML solutions. Your primary responsibilities will include: developing software and data engineering solutions for in-house AI/ML products; enhancing existing ML platforms and devising MLOps capabilities; understanding existing data structures in factory automation systems and building data pipelines connecting different systems; testing and supporting full-stack big data engineering systems; developing data ingestion pipelines, data access APIs, and services; monitoring and maintaining deployment environments and platforms; creating technical documentation and collaborating with peers/engineering teams to streamline solution development, validation, and deployment; managing factory big data interaction with cloud environments, Oracle, SQL, Python, software architecture, and MLOps; and interfacing with process and integration functional area analytics teams and customers using advanced automated process control systems.
Qualifications:
Minimum Qualifications: Master's or PhD degree in Computer Science, Computer Engineering, or a related Science/Engineering discipline. 3+ years of experience in data engineering/software development and knowledge of Spark, NiFi, Hadoop, HBase, S3 object storage, Kubernetes, REST APIs, and services. Intermediate to advanced English proficiency (both verbal and written).
Preferred Qualifications: 2+ years in data analytics and machine learning (Python, R, JMP, etc.)
and relational databases (SQL). 2+ years in a technical leadership role. 3+ months of working knowledge of CI/CD (Continuous Integration/Continuous Deployment) and proficiency with GitHub and GitHub Actions. Prior interaction with factory automation systems.
Application Process: By applying to this posting, your resume and profile will become visible to Intel recruiters, allowing them to consider you for current and future job openings aligned with the skills and positions mentioned above. We are constantly working towards a more connected and intelligent future, and we need your help. Change tomorrow. Start today.
Job Type: Experienced Hire
Shift: Shift 1 (India)
Primary Location: India, Bangalore
Additional Locations:
Business group: As the world's largest chip manufacturer, Intel strives to make every facet of semiconductor manufacturing state-of-the-art -- from semiconductor process development and manufacturing, through yield improvement to packaging, final test and optimization, and world-class supply chain and facilities support. Employees in the Technology Development and Manufacturing Group are part of a worldwide network of design, development, manufacturing, and assembly/test facilities, all focused on utilizing the power of Moore's Law to bring smart, connected devices to every person on Earth.
Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Position of Trust: N/A
Work Model for this Role: This role will be eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site.
Posted 2 months ago
1.0 - 4.0 years
1 - 5 Lacs
Mumbai
Work from Office
Location Mumbai Role Overview : As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems. Key Responsibilities : Build scalable batch and real-time ETL pipelines using Spark and Hive Integrate structured and unstructured data sources Perform performance tuning and code optimization Support orchestration and job scheduling (NiFi, Airflow) Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Skills Required : Proficiency in PySpark/Scala with Hive/Impala Experience with data partitioning, bucketing, and optimization Familiarity with Kafka, Iceberg, NiFi is a must Knowledge of banking or financial datasets is a plus
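As a hedged sketch of the batch ETL with partitioning and bucketing described above, using PySpark against a Cloudera-style Hive metastore; the table and column names are hypothetical:

```python
# Illustrative batch ETL with partitioning and bucketing; names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("txn-etl").enableHiveSupport().getOrCreate()

txns = spark.table("raw.card_transactions")

curated = (txns
           .filter(F.col("amount") > 0)
           .withColumn("txn_date", F.to_date("txn_ts")))

(curated.write
    .mode("overwrite")
    .partitionBy("txn_date")          # prune by date at query time
    .bucketBy(32, "account_id")       # co-locate rows for joins on account_id
    .sortBy("account_id")
    .saveAsTable("curated.card_transactions"))
```

Partitioning narrows scans for date-bounded queries, while bucketing and sorting on the join key reduce shuffle for downstream joins, which is the kind of optimization the posting refers to.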
Posted 2 months ago
3.0 - 6.0 years
9 - 14 Lacs
Mumbai
Work from Office
Role Overview: We are looking for a Talend Data Catalog Specialist to drive enterprise data governance initiatives by implementing Talend Data Catalog and integrating it with Apache Atlas for unified metadata management within a Cloudera-based data lakehouse. The role involves establishing metadata lineage, glossary harmonization, and governance policies to enhance trust, discovery, and compliance across the data ecosystem.
Key Responsibilities:
o Set up and configure Talend Data Catalog to ingest and manage metadata from source systems, the data lake (HDFS), Iceberg tables, the Hive metastore, and external data sources.
o Develop and maintain business glossaries, data classifications, and metadata models.
o Design and implement bi-directional integration between Talend Data Catalog and Apache Atlas to enable metadata synchronization, lineage capture, and policy alignment across the Cloudera stack.
o Map technical metadata from Hive/Impala to business metadata defined in Talend.
o Capture end-to-end lineage of data pipelines (e.g., from ingestion in PySpark to consumption in BI tools) using Talend and Atlas.
o Provide impact analysis for schema changes, data transformations, and governance rule enforcement.
o Support definition and rollout of enterprise data governance policies (e.g., ownership, stewardship, access control).
o Enable role-based metadata access, tagging, and data sensitivity classification.
o Work with data owners, stewards, and architects to ensure data assets are well-documented, governed, and discoverable.
o Provide training to users on leveraging the catalog for search, understanding, and reuse.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: 6-12 years in data governance or metadata management, with at least 2-3 years in Talend Data Catalog. Talend Data Catalog, Apache Atlas, Cloudera CDP, Hive/Impala, Spark, HDFS, SQL. Business glossary, metadata enrichment, lineage tracking, stewardship workflows. Hands-on experience in Talend-Atlas integration, either through REST APIs, Kafka hooks, or metadata bridges.
Posted 2 months ago
3.0 - 7.0 years
6 - 10 Lacs
Mumbai
Work from Office
Role Overview: We are looking for a Kafka SME to design and support real-time data ingestion pipelines using Kafka within a Cloudera-based Lakehouse architecture.
Key Responsibilities: Design Kafka topics, partitions, and schema registry. Implement producer-consumer apps using Spark Structured Streaming. Set up Kafka Connect, monitoring, and alerts. Ensure secure, scalable message delivery.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise / Skills Required: Deep understanding of Kafka internals and ecosystem. Integration with Cloudera and NiFi. Schema evolution and serialization (Avro, Parquet). Performance tuning and fault tolerance.
Preferred technical and professional experience: Good communication skills. India market experience is preferred.
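For the "producer-consumer apps using Spark Structured Streaming" responsibility above, a minimal streaming-read sketch is shown below; the broker, topic, schema, and output paths are assumptions, and the spark-sql-kafka package must be available on the cluster:

```python
# Sketch of a Spark Structured Streaming consumer for a Kafka topic; all names
# (broker, topic, columns, paths) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

schema = StructType([
    StructField("event_id", StringType()),
    StructField("status", StringType()),
    StructField("event_ts", TimestampType()),
])

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "trade.events")        # hypothetical topic
          .option("startingOffsets", "latest")
          .load()
          # Kafka delivers bytes; parse the JSON payload into typed columns.
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/streams/trade_events")
         .option("checkpointLocation", "/data/checkpoints/trade_events")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()
```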
Posted 2 months ago
3.0 - 5.0 years
8 - 12 Lacs
Gurugram, Delhi
Work from Office
Role Description: This is a full-time hybrid role for an Apache NiFi Developer based in Gurugram with some work-from-home options. The Apache NiFi Developer will be responsible for designing, developing, and maintaining data workflows and pipelines. The role includes programming, implementing backend web development solutions, using object-oriented programming (OOP) principles, and collaborating with team members to enhance software solutions.
Qualifications: Knowledge of Apache NiFi and experience in programming. Skills in back-end web development and software development. Data pipeline development. Strong understanding of Apache NiFi. Background in Computer Science. Excellent problem-solving and analytical skills. Ability to work in a hybrid environment. Experience in AI and Blockchain is a plus. Bachelor's degree in Computer Science or related field.
Posted 2 months ago
2.0 - 4.0 years
3 - 8 Lacs
Coimbatore
Work from Office
We are looking for an experienced ETL Developer to join our team. The ideal candidate will have strong experience in ETL tools and processes, particularly with Talend, Informatica, Apache NiFi, Pentaho, or SSIS. The role requires excellent technical knowledge of databases, particularly MySQL, and a strong ability to integrate data from multiple sources. The candidate must also have strong manual testing skills and experience using version control systems such as Git, along with project tracking tools like JIRA.
Key Responsibilities: Design, develop, and implement ETL processes using tools like Talend, Informatica, Apache NiFi, Pentaho, or SSIS. Develop and maintain data pipelines to integrate data from various sources including APIs, cloud storage, and third-party applications. Perform data mapping, data transformation, and data cleansing to ensure data quality. Write complex SQL queries for data extraction, transformation, and loading from MySQL databases. Collaborate with cross-functional teams to understand data requirements and provide scalable solutions. Conduct manual testing to ensure the accuracy and performance of ETL processes and data. Manage version control with Git, and track project progress in JIRA. Troubleshoot and resolve issues related to ETL processes, data integration, and testing. Ensure adherence to best practices for ETL design, testing, and documentation.
Required Skills: 2.5 to 4 years of experience in ETL development with tools such as Talend, Informatica, Apache NiFi, Pentaho, or SSIS. Strong hands-on experience with MySQL databases. Proven ability to integrate data from diverse sources (APIs, cloud storage, third-party apps). Solid manual testing experience, with a focus on ensuring data accuracy and process integrity. Familiarity with Git for version control and JIRA for project management. Strong problem-solving skills with the ability to troubleshoot and resolve technical issues. Excellent communication skills and the ability to collaborate with cross-functional teams.
Preferred Skills: Experience working with cloud platforms such as AWS, Azure, or GCP. Knowledge of automation frameworks for testing ETL processes.
Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
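As an illustration of the MySQL-centred ETL work described in this role, a small extract-transform-load sketch using the mysql-connector-python client is shown below; the connection details and table names are hypothetical:

```python
# Illustrative extract-transform-load against MySQL; names and credentials are hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="etl_user", password="***", database="sales"
)
cur = conn.cursor()

# Extract: pull yesterday's orders from the operational table.
cur.execute("""
    SELECT order_id, customer_id, amount
    FROM orders
    WHERE order_date = CURDATE() - INTERVAL 1 DAY
""")
rows = cur.fetchall()

# Transform: drop zero-amount rows and normalise amounts to two decimals.
cleaned = [(oid, cid, round(float(amt), 2)) for oid, cid, amt in rows if amt]

# Load: upsert into the reporting table.
cur.executemany(
    """INSERT INTO rpt_daily_orders (order_id, customer_id, amount)
       VALUES (%s, %s, %s)
       ON DUPLICATE KEY UPDATE amount = VALUES(amount)""",
    cleaned,
)
conn.commit()
conn.close()
```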
Posted 2 months ago
12.0 - 22.0 years
25 - 40 Lacs
Bangalore Rural, Bengaluru
Work from Office
Role & responsibilities
Requirements:
Data Modeling (Conceptual, Logical, Physical) - Minimum 5 years
Database Technologies (SQL Server, Oracle, PostgreSQL, NoSQL) - Minimum 5 years
Cloud Platforms (AWS, Azure, GCP) - Minimum 3 years
ETL Tools (Informatica, Talend, Apache NiFi) - Minimum 3 years
Big Data Technologies (Hadoop, Spark, Kafka) - Minimum 5 years
Data Governance & Compliance (GDPR, HIPAA) - Minimum 3 years
Master Data Management (MDM) - Minimum 3 years
Data Warehousing (Snowflake, Redshift, BigQuery) - Minimum 3 years
API Integration & Data Pipelines - Good to have
Performance Tuning & Optimization - Minimum 3 years
Business Intelligence (Power BI, Tableau) - Minimum 3 years
Job Description: We are seeking experienced Data Architects to design and implement enterprise data solutions, ensuring data governance, quality, and advanced analytics capabilities. The ideal candidate will have expertise in defining data policies, managing metadata, and leading data migrations from legacy systems to Microsoft Fabric/DataBricks/. Experience and deep knowledge of at least one of these 3 platforms is critical. Additionally, they will play a key role in identifying use cases for advanced analytics and developing machine learning models to drive business insights.
Key Responsibilities:
1. Data Governance & Management: Establish and maintain a data usage hierarchy to ensure structured data access. Define data policies, standards, and governance frameworks to ensure consistency and compliance. Implement data quality management practices to improve accuracy, completeness, and reliability. Oversee metadata and Master Data Management (MDM) to enable seamless data integration across platforms.
2. Data Architecture & Migration: Lead the migration of data systems from legacy infrastructure to Microsoft Fabric. Design scalable, high-performance data architectures that support business intelligence and analytics. Collaborate with IT and engineering teams to ensure efficient data pipeline development.
3. Advanced Analytics & Machine Learning: Identify and define use cases for advanced analytics that align with business objectives. Design and develop machine learning models to drive data-driven decision-making. Work with data scientists to operationalize ML models and ensure real-world applicability.
Required Qualifications: Proven experience as a Data Architect or similar role in data management and analytics. Strong knowledge of data governance frameworks, data quality management, and metadata management. Hands-on experience with Microsoft Fabric and data migration from legacy systems. Expertise in advanced analytics, machine learning models, and AI-driven insights. Familiarity with data modelling, ETL processes, and cloud-based data solutions (Azure, AWS, or GCP). Strong communication skills with the ability to translate complex data concepts into business insights.
Preferred candidate profile: Immediate joiner
Posted 2 months ago
4.0 - 9.0 years
3 - 8 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Role & responsibilities
Site Reliability Engineer
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, Data Mesh, and security best practices is a plus
• Strong problem-solving and debugging mindset
• Ability to work under pressure in a fast-paced environment
• Excellent communication and collaboration skills
• Ownership, customer orientation, and a bias for action
Preferred candidate profile: Immediate joiner
Posted 2 months ago
2 - 5 years
2 - 5 Lacs
Bengaluru
Work from Office
Databricks Engineer
Full-time | Department: Digital, Data and Cloud
Company Description: Version 1 has celebrated over 26+ years in Technology Services and continues to be trusted by global brands to deliver solutions that drive customer success. Version 1 has several strategic technology partners including Microsoft, AWS, Oracle, Red Hat, OutSystems and Snowflake. We're also an award-winning employer, reflecting how employees are at the heart of Version 1. We've been awarded: Innovation Partner of the Year Winner 2023 Oracle EMEA Partner Awards, Global Microsoft Modernising Applications Partner of the Year Award 2023, AWS Collaboration Partner of the Year - EMEA 2023, and Best Workplaces for Women by Great Place To Work in UK and Ireland 2023. As a consultancy and service provider, Version 1 is a digital-first environment and we do things differently. We're focused on our core values; using these we've seen significant growth across our practices and our Digital, Data and Cloud team is preparing for the next phase of expansion. This creates new opportunities for driven and skilled individuals to join one of the fastest-growing consultancies globally.
About The Role: This is an exciting opportunity for an experienced developer of large-scale data solutions. You will join a team delivering a transformative cloud-hosted data platform for a key Version 1 customer. The ideal candidate will have a proven track record as a senior, self-starting data engineer in implementing data ingestion and transformation pipelines for large-scale organisations. We are seeking someone with deep technical skills in a variety of technologies, specifically Spark performance tuning/optimisation and Databricks, to play an important role in developing and delivering early proofs of concept and production implementation. You will ideally have experience in building solutions using a variety of open source tools and Microsoft Azure services, and a proven track record in delivering high quality work to tight deadlines.
Your main responsibilities will be: Designing and implementing highly performant, metadata-driven data ingestion and transformation pipelines from multiple sources using Databricks (see the sketch below). Streaming and batch processes in Databricks. Spark performance tuning/optimisation. Providing technical guidance for complex geospatial problems and Spark dataframes. Developing scalable and re-usable frameworks for ingestion and transformation of large data sets. Data quality system and process design and implementation. Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times. Working with other members of the project team to support delivery of additional project components (reporting tools, API interfaces, search). Evaluating the performance and applicability of multiple tools against customer requirements. Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
Qualifications: Direct experience of building data pipelines using Azure Data Factory and Databricks. Experience required is 6 to 8 years. Building data integration with Python. Databricks Engineer certification. Microsoft Azure Data Engineer certification. Hands-on experience designing and delivering solutions using the Azure Data Analytics platform. Experience building data warehouse solutions using ETL / ELT tools like Informatica, Talend.
Comprehensive understanding of data management best practices including demonstrated experience with data profiling, sourcing, and cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking and matching.
Nice to have: Experience working in a DevOps environment with tools such as Microsoft Visual Studio Team Services, Chef, Puppet or Terraform. Experience working with structured and unstructured data including imaging and geospatial data. Experience with open source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j). Experience with Azure Event Hub, IoT Hub, Apache Kafka, NiFi for use with streaming data / event-based data.
Additional Information: At Version 1, we believe in providing our employees with a comprehensive benefits package that prioritises their well-being, professional growth, and financial stability. One of our standout advantages is the ability to work with a hybrid schedule along with business travel, allowing our employees to strike a balance between work and life. We also offer a range of tech-related benefits, including an innovative Tech Scheme to help keep our team members up-to-date with the latest technology. We prioritise the health and safety of our employees, providing private medical and life insurance coverage, as well as free eye tests and contributions towards glasses. Our team members can also stay ahead of the curve with incentivized certifications and accreditations, including AWS, Microsoft, Oracle, and Red Hat. Our employee-designed Profit Share scheme divides a portion of our company's profits each quarter amongst employees. We are dedicated to helping our employees reach their full potential, offering Pathways Career Development Quarterly, a programme designed to support professional growth.
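As a hedged sketch of the metadata-driven Databricks ingestion pattern this role describes, the snippet below reads landed Parquet files and appends them to Delta tables; the storage paths and table names are assumptions:

```python
# Sketch of metadata-driven ingestion into Delta tables on Databricks;
# storage paths and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

SOURCES = [
    {"path": "abfss://landing@account.dfs.core.windows.net/orders/", "table": "bronze.orders"},
    {"path": "abfss://landing@account.dfs.core.windows.net/customers/", "table": "bronze.customers"},
]

for src in SOURCES:
    # Read the landed files and append them to the corresponding bronze Delta table.
    (spark.read.format("parquet").load(src["path"])
         .write.format("delta")
         .mode("append")
         .saveAsTable(src["table"]))
    print(f"ingested {src['path']} into {src['table']}")
```

In a real implementation the SOURCES list would typically come from a control table or config file maintained by Azure Data Factory, so new feeds can be onboarded without code changes.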
Posted 2 months ago
5 - 8 years
10 - 14 Lacs
Chennai
Work from Office
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.
About The Role
Role Purpose: The purpose of this role is to provide solutions and bridge the gap between technology and business know-how to deliver any client solution.
Job Description (Experience: 5-8 years):
Good understanding of DWH
GCP (Google Cloud Platform) BigQuery knowledge
Knowledge of GCP Storage
GCP Workflows and Functions
Python
CDC Extractor Tools like (Qlik/NiFi)
BI knowledge (e.g., Power BI or Looker)
2. Skill upgradation and competency building: Clear Wipro exams and internal certifications from time to time to upgrade the skills. Attend trainings and seminars to sharpen the knowledge in the functional/technical domain. Write papers, articles, and case studies and publish them on the intranet.
Deliver (No. / Performance Parameter / Measure):
1. Contribution to customer projects: Quality, SLA, ETA, no. of tickets resolved, problems solved, no. of change requests implemented, zero customer escalation, CSAT
2. Automation: Process optimization, reduction in process/steps, reduction in no. of tickets raised
3. Skill upgradation: No. of trainings and certifications completed, no. of papers and articles written in a quarter
Mandatory Skills: Cloud-PaaS-GCP-Google Cloud Platform. Experience: 5-8 years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
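To illustrate the GCP BigQuery work this role calls for, here is a small sketch using the google-cloud-bigquery client to materialise a reporting table; the project, dataset, and table names are hypothetical:

```python
# Small BigQuery transform step using the google-cloud-bigquery client;
# project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

# Run a transformation query and materialise the result into a reporting table.
job = client.query("""
    CREATE OR REPLACE TABLE `my-gcp-project.reporting.daily_sales` AS
    SELECT order_date, SUM(amount) AS total_amount
    FROM `my-gcp-project.dwh.orders`
    GROUP BY order_date
""")
job.result()  # block until the job finishes

print(f"query finished, processed {job.total_bytes_processed} bytes")
```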
Posted 2 months ago
5 - 6 years
7 - 8 Lacs
Gurugram
Work from Office
Site Reliability Engineer
Job Description: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities: Ensure platform uptime and application health as per SLOs/KPIs. Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. Debug and resolve complex production issues, performing root cause analysis. Automate routine tasks and implement self-healing systems. Design and maintain dashboards, alerts, and operational playbooks. Participate in incident management, problem resolution, and RCA documentation. Own and update SOPs for repeatable processes. Collaborate with L3 and Product teams for deeper issue resolution. Support and guide the L1 operations team. Conduct periodic system maintenance and performance tuning. Respond to user data requests and ensure timely resolution. Address and mitigate security vulnerabilities and compliance issues.
Technical Skillset: Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger. Strong Linux fundamentals and scripting (Python, Shell). Experience with Apache NiFi, Airflow, YARN, and ZooKeeper. Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki. Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines. Strong SQL skills (Oracle/Exadata preferred). Familiarity with DataHub, Data Mesh, and security best practices is a plus. Strong problem-solving and debugging mindset. Ability to work under pressure in a fast-paced environment. Excellent communication and collaboration skills. Ownership, customer orientation, and a bias for action.
Posted 2 months ago
16 - 21 years
40 - 45 Lacs
Gurugram
Work from Office
The Role: Enterprise Architect - Integration
The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals, who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute-intensive applications, leveraging contemporary microservices and cloud-based architectures.
The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.
What's in it for you: The current objective is to identify individuals with 16+ years of experience who have high expertise, to join their existing team of experts who are spread across the world. This is your opportunity to start at the beginning and get the advantages of rapid early growth. This role is based out of Gurgaon and is expected to work with different teams and colleagues across the globe. This is an excellent opportunity to be part of a team based out of Gurgaon and to work with colleagues across multiple regions globally.
Responsibilities: The role shall be responsible for establishing, maintaining, socialising and realising the target-state integration strategy for the FX & Securities post-trade businesses of Osttra. This shall encompass the post-trade lifecycle of our businesses, including connectivity with clients, the markets ecosystem and Osttra's post-trade family of networks, platforms and products. The role shall partner with product architects, product managers, delivery heads and teams for refactoring the deliveries towards the target state. They shall be responsible for the efficiency, optimisation, oversight and troubleshooting of current-day integration solutions, platforms and deliveries as well, in addition to the target-state focus. The role shall be expected to produce and maintain an integration architecture blueprint. This shall cover the current state and propose a rationalised view of the target state of end-to-end integration flows and patterns. The role shall also provide for and enable the needed technology platforms/tools and engineering methods to realise the strategy. The role enables standardisation of protocols/formats (at least within the Osttra world) and tools, and reduces the duplication and non-differentiated heavy lift in systems. The role shall enable the documentation of flows and capture of standard message models. The integration strategy shall also include a transformation strategy, which is so vital in a multi-lateral/party/system post-trade world. The role shall partner with other architects and strategies/programmes and enable the demands of UI, application, and data strategies.
What We're Looking For: Rich domain experience of the financial services industry, preferably with financial markets, pre/post-trade lifecycles and large-scale buy/sell/brokerage organisations. Should have experience of leading integration strategies and delivering the integration design and architecture for complex programmes and financial enterprises, catering to key variances of latency/throughput.
Experience with API Management platforms (like AWS API Gateway, Apigee, Kong, MuleSoft Anypoint) and key management concepts (API lifecycle management, versioning strategies, developer portals, rate limiting, policy enforcement). Should be adept with integration and transformation methods, technologies and tools. Should have experience of domain modelling for messages/events/streams and APIs. Rich experience of architectural patterns like event-driven architectures, microservices, event streaming, message processing/orchestration, CQRS, event sourcing, etc. Experience of protocols or integration technologies like FIX, SWIFT, MQ, FTP, API, etc., including knowledge of authentication patterns (OAuth, mTLS, JWT, API keys), authorization mechanisms, data encryption (in transit and at rest), secrets management, and security best practices. Experience of messaging formats and paradigms like XSD, XML, XSLT, JSON, Protobuf, REST, gRPC, GraphQL, etc. Experience of technologies like Kafka or AWS Kinesis, Spark streams, Kubernetes/EKS, AWS EMR. Experience of languages like Java and Python, and message orchestration frameworks like Apache Camel, Apache NiFi, AWS Step Functions, etc. Experience in designing and implementing traceability/observability strategies for integration systems and familiarity with relevant framework tooling. Experience of engineering methods like CI/CD, build and deploy automation, infrastructure as code, and integration testing methods and tools. Should have an appetite to review and write code for complex problems, and should find interest and energy in design discussions and reviews. Experience and strong understanding of multi-cloud integration patterns.
Posted 2 months ago
4 - 8 years
25 - 30 Lacs
Pune
Hybrid
So, what's the role all about? As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems, as well as working with cross-functional teams to ensure efficient data processing and integration. You will leverage your knowledge of Apache Spark to create robust ETL processes, optimize data workflows, and manage high volumes of structured and unstructured data.
How will you make an impact? Design, implement, and maintain data pipelines using Apache Spark for processing large datasets. Work with data engineering teams to optimize data workflows for performance and scalability. Integrate data from various sources, ensuring clean, reliable, and high-quality data for analysis. Develop and maintain data models, databases, and data lakes. Build and manage scalable ETL solutions to support business intelligence and data science initiatives. Monitor and troubleshoot data processing jobs, ensuring they run efficiently and effectively. Collaborate with data scientists, analysts, and other stakeholders to understand business needs and deliver data solutions. Implement data security best practices to protect sensitive information. Maintain a high level of data quality and ensure timely delivery of data to end-users. Continuously evaluate new technologies and frameworks to improve data engineering processes.
Have you got what it takes? 4-7 years of experience as a Data Engineer, with a strong focus on Apache Spark and big data technologies. Expertise in Spark SQL, DataFrames, and RDDs for data processing and analysis. Proficient in programming languages such as Python, Scala, or Java for data engineering tasks. Hands-on experience with cloud platforms like AWS, specifically with data processing and storage services (e.g., S3, BigQuery, Redshift, Databricks). Experience with ETL frameworks and tools such as Apache Kafka, Airflow, or NiFi. Strong knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery). Familiarity with containerization technologies like Docker and Kubernetes. Knowledge of SQL and relational databases, with the ability to design and query databases effectively. Solid understanding of distributed computing, data modeling, and data architecture principles. Strong problem-solving skills and the ability to work with large and complex datasets. Excellent communication and collaboration skills to work effectively with cross-functional teams.
You will have an advantage if you also have: Knowledge of SQL and relational databases, with the ability to design and query databases effectively. Solid understanding of distributed computing, data modeling, and data architecture principles. Strong problem-solving skills and the ability to work with large and complex datasets.
What's in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!
Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week.
Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7235 Reporting into: Tech Manager Role Type: Individual Contributor
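For context on the kind of Spark work described above, here is a minimal, hypothetical PySpark ETL sketch. The input path, column names, and output location are placeholders invented for the example, not part of the original posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical job: aggregate completed daily order totals from raw JSON into Parquet.
spark = SparkSession.builder.appName("daily-order-aggregation").getOrCreate()

orders = spark.read.json("s3a://example-bucket/raw/orders/")  # placeholder path

daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_order_totals/"  # placeholder path
)

spark.stop()
```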
Posted 2 months ago
5 - 10 years
15 - 30 Lacs
Hyderabad
Work from Office
What is Blend Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com What is the Role? We are seeking a highly skilled Lead Data Engineer to join our data engineering team for an on-premise environment. A large portion of your time will be in the weeds working alongside your team architecting, designing, implementing, and optimizing data solutions. The ideal candidate will have extensive experience in building and optimizing data pipelines, architectures, and data sets, with a strong focus on Python, SQL, Hadoop, HDFS, and Apache NiFi. What you'll be doing? Design, develop, and maintain robust, scalable, and high-performance data pipelines and data integration solutions. Manage and optimize data storage in the Hadoop Distributed File System (HDFS). Design and implement data workflows using Apache NiFi for data ingestion, transformation, and distribution. Collaborate with cross-functional teams to understand data requirements and deliver efficient solutions. Ensure data quality, governance, and security standards are met within the on-premise infrastructure. Monitor and troubleshoot data pipelines to ensure optimal performance and reliability. Automate data workflows and processes to enhance system efficiency. What do we need from you? Bachelor’s degree in Computer Science, Software Engineering, or a related field. 6+ years of experience in data engineering or a related field. Strong programming skills in Python and SQL. Hands-on experience with the Hadoop ecosystem (HDFS, Hive, etc.); a minimal illustrative HDFS sketch follows this posting. Proficiency in Apache NiFi for data ingestion and flow orchestration. Experience in data modeling, ETL development, and data warehousing concepts. Strong problem-solving skills and ability to work independently in a fast-paced environment. Good understanding of data governance, data security, and best practices in on-premise environments. What do you get in return? Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table. Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career. Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future. Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills. Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing. Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts and the chance to see your ideas come to life as part of our reward program.
Fuel Your Growth Journey with Certifications: We’re all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
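The following is a minimal, hypothetical sketch of the kind of programmatic HDFS access this role involves, using the community `hdfs` (WebHDFS) Python client. The NameNode address, user, and paths are assumptions invented for the example, not details from the posting.

```python
from hdfs import InsecureClient  # pip install hdfs

# Hypothetical NameNode HTTP address and user; replace with your cluster's values.
client = InsecureClient("http://namenode.example.internal:9870", user="etl")

# Write a small CSV extract into a landing directory on HDFS.
with client.write("/data/landing/customers/customers.csv",
                  encoding="utf-8", overwrite=True) as writer:
    writer.write("id,name\n1,Asha\n2,Ravi\n")

# List the landing directory and read the file back to confirm the write.
print(client.list("/data/landing/customers"))
with client.read("/data/landing/customers/customers.csv", encoding="utf-8") as reader:
    print(reader.read())
```

In practice, bulk movement into HDFS in this stack would usually flow through NiFi processors (e.g., a PutHDFS step); the client sketch above just shows the underlying filesystem interaction.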
Posted 2 months ago
2 - 4 years
12 - 22 Lacs
Bengaluru
Work from Office
About Lowe's Lowe's Companies, Inc. (NYSE: LOW) is a FORTUNE 50 home improvement company serving approximately 16 million customer transactions a week in the United States. With total fiscal year 2024 sales of more than $83 billion, Lowe's operates over 1,700 home improvement stores and employs approximately 300,000 associates. Based in Mooresville, N.C., Lowe's supports the communities it serves through programs focused on creating safe, affordable housing, improving community spaces, helping to develop the next generation of skilled trade experts and providing disaster relief to communities in need. For more information, visit Lowes.com. Job Summary The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver code modules, stable application systems, and software solutions, and to maintain and monitor them. This includes developing, configuring, modifying, maintaining, and monitoring integrated business and/or enterprise application solutions within various computing environments. This role facilitates the implementation and maintenance of business and enterprise software solutions to ensure successful deployment of released applications. Roles & Responsibilities Core Responsibilities 1. Design and develop applications; this includes both backend and frontend development. 2. Development and maintenance of microservices, standalone applications, libraries, etc. 3. Will have to work on a cloud platform, which includes development, deployment, and monitoring. 4. Will have to work on databases when needed. 5. Should be ready to give all the support that the application needs once it's in production, including being on call during the week or weekend as needed by the project. 6. Debug production issues, come up with multiple solutions, and have the ability to choose the best possible solution. 7. Should be well-versed with documentation (including different UML diagrams). 8. Ready to work in a hybrid model (Scrum + Kanban). 9. Should focus on quality and time to market. 10. Should be very proactive and ready to work on any given task. 11. Must be able to multitask and be quick to adapt to changes in business requirements. 12. Should be able to provide out-of-the-box solutions for a problem. 13. Should be able to communicate effectively within and outside the team. 14. Should be aligned with the team and be a good team player. Years of Experience Minimum 2+ years of experience in Software Development and Maintenance (SDLC) Education Qualification & Certifications • Bachelor's/Master's degree in computer science, CIS, or related field (or equivalent work experience in a related field). • Minimum of 2+ years of experience in software development and maintenance. • Minimum of 2+ years of experience in database technologies. • Minimum of 2+ years of experience working with defect or incident tracking software. • Minimum of 2+ years of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC). • Minimum of 2+ years of experience with technical documentation in a software development environment. • Working experience with application coding/testing/debugging and networking. • Working experience with database technologies. Primary Skills (Must Have) 1. Java, Reactive programming, Spring Boot, Node.js, Microservices. 2. Apache PySpark, Python. 3. Data pipelines using the Apache NiFi framework.
4. PostgreSQL database, SQL, JPA. 5. Cloud-based development and deployments, Kubernetes, Docker, Prometheus. 6. Basic network configuration knowledge, Linux, different application servers. 7. Git, Bitbucket, Splunk, Kibana, JIRA, Confluence. Secondary Skills (Desired) 1. Working experience with front-end React technologies, GCP, Jenkins, Linux scripting. 2. Any certification is an added advantage.
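To illustrate how the PySpark and PostgreSQL skills listed above commonly come together, here is a minimal, hypothetical sketch that reads a table over JDBC into a Spark DataFrame. The connection details, table, and column names are placeholders, and the PostgreSQL JDBC driver is assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes the PostgreSQL JDBC driver is available to Spark,
# e.g. spark-submit --packages org.postgresql:postgresql:42.7.3
spark = SparkSession.builder.appName("orders-from-postgres").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.internal:5432/shop")  # placeholder
    .option("dbtable", "public.orders")                                # placeholder
    .option("user", "report_reader")                                   # placeholder
    .option("password", "change-me")                                   # placeholder
    .option("driver", "org.postgresql.Driver")
    .load()
)

# Simple sanity check: order counts per status.
orders.groupBy("status").agg(F.count("*").alias("orders")).show()
```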
Posted 2 months ago
7 - 9 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer Project Role Description: Design, build and configure applications to meet business process and application requirements. Must have skills: o9 Solutions Good to have skills: NA Minimum 7 year(s) of experience is required Educational Qualification: BE/BTech/MCA/Bachelor's degree/master's degree in computer science and related fields of work are preferred. Key Responsibilities: Play the integration consultant role on o9 implementation projects. Understand the o9 platform's data model (table structures, linkages, optimal designs) for designing various planning use cases. Review and analyze the data provided by the customer along with its technical/functional intent and inter-dependencies. Participate in the technical design and data requirements gathering, making recommendations in case of inaccurate or missing data. Work on designing and creating batch schedules based on frequency and configuration settings for daily/weekly/quarterly/yearly batches. E2E integration implementation from partner systems to the o9 platform. Technical Experience: Minimum 3 to 7 years of experience in SQL/PLSQL, SSIS. Proficiency in databases (SQL Server, MySQL); knowledge of DDL, DML, stored procedures, SSMS, o9 DB designer, o9 Batch Orchestrator (a minimal illustrative stored-procedure call sketch follows this posting). At least one E2E integration implementation from a partner system to o9 will be preferred. Any API-based integration experience will be an added advantage. Good to have experience in Kafka, NiFi, PySpark, Python. Professional Attributes: Proven ability to work creatively and analytically in a problem-solving environment. Proven ability to build, manage and foster a team-oriented environment. Excellent problem-solving skills with excellent written/oral communication and interpersonal skills. Strong collaborator, team player, and individual contributor. Additional Info: Open to travel - short / long term. Qualification: BE/BTech/MCA/Bachelor's degree/master's degree in computer science and related fields of work are preferred.
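Since this role centers on SQL Server stored procedures and batch integration, here is a minimal, hypothetical Python sketch that invokes a stored procedure over ODBC with pyodbc. The connection details, procedure, and table names are placeholders for illustration only; they are not part of the o9 toolchain or the posting.

```python
import pyodbc  # pip install pyodbc; requires an installed ODBC driver

# Hypothetical connection details; replace with your environment's values.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.internal;"
    "DATABASE=planning;"
    "UID=batch_user;PWD=change-me"
)
cursor = conn.cursor()

# Run a hypothetical load procedure for one planning week, then report row counts.
cursor.execute("EXEC dbo.usp_load_weekly_forecast @planning_week = ?", "2024-W32")
conn.commit()

cursor.execute(
    "SELECT COUNT(*) FROM dbo.weekly_forecast WHERE planning_week = ?", "2024-W32"
)
print("rows loaded:", cursor.fetchone()[0])

cursor.close()
conn.close()
```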
Posted 2 months ago
3 - 7 years
9 - 13 Lacs
Hyderabad
Work from Office
About The Role Senior Software Developer Job Location (Short): Hyderabad, India Workplace Type: Hybrid Business Unit: ALI Req Id: 1492 Responsibilities Our product team is responsible for designing, developing, testing, deploying, and managing the HxGN Databridge Pro (DBPro) product. DBPro, powered by Apache NiFi, provides a cloud-based, multi-tenant platform to build integrations for HxGN EAM customer engagements and integration to Xalt and other Hexagon products. Hexagon is seeking a highly motivated software developer to join the team and work on the product implementation of new features. This person is responsible for designing, writing, executing, testing, and maintaining the application. They will be working with a team of developers along with architects and QA analysts to ensure the quality of the product. Our application is a multi-tenant AWS cloud-based application. This is an exciting time for our team as our application is being used as the company-standard product for moving data between Hexagon applications. You will contribute by partnering with solution architects and implementation teams to ensure clean code is released for production. A Day in The Life Typically Includes: Collaborate with the manager, business analysts, and other developers to clarify and finalize requirements and produce corresponding functional specifications for general applications and infrastructure. Work with other software developers and architects to design and implement applications using Java code and enhancements as needed. Maintain and enhance applications on an ongoing basis per user/customer feedback. Ensure that unit and system tests are automated, per quality assurance requirements. Collaborate as necessary to define and implement regression test suites. Optimize performance and scalability as necessary to meet business goals of the application and environment. Works under limited supervision. May be required to work extended hours to meet project timelines as needed. Education / Qualifications Bachelor of Science in Computer Science or equivalent work experience. Minimum of 3 years of Java coding experience for technologies in a fast-paced environment. Strong object-oriented software systems design and architectural skills. Experience in the following areas: Experience with JDK 1.8 and up (Java 11 preferred), Spring Boot, Maven, Git, REST API principles, JSON, and mapping frameworks. Experience and understanding in designing and developing software while applying design patterns and object-oriented principles. Experience in unit testing: JUnit, assertion, and mocking frameworks (a minimal illustrative test sketch follows this posting). Knowledge of Angular 1.x, JavaScript, HTML, CSS, and jQuery. Experience using Agile development methodologies. Experience with all phases of the software development life cycle. Exposure and working knowledge of the following areas: Configuration management tools such as Git and Maven. Docker containers. Works with limited supervision. Flexibility and willingness to pitch in where needed.
Ability to deliver results, prioritize activities, and manage time effectively. Communicates effectively in English (both written and verbal). What Will Put You Ahead / Preferred Qualifications: Experience in working with and testing enterprise web applications in a cloud environment such as AWS. Experience in database technologies and writing optimized queries. Experience working with Angular 1.x, JavaScript, HTML, CSS, and jQuery. Knowledge of Kubernetes. Experience using Terraform for deployments. Experience writing Python scripts. #LI-VBP1 #LI-Hybrid About Hexagon Hexagon is a global leader in digital reality solutions, combining sensor, software and autonomous technologies. We are putting data to work to boost efficiency, productivity, quality and safety across industrial, manufacturing, infrastructure, public sector, and mobility applications. Hexagon’s Asset Lifecycle Intelligence division helps clients design, construct, and operate more profitable, safe, and sustainable industrial facilities. We empower customers to unlock data, accelerate industrial project modernization and digital maturity, increase productivity, and move the sustainability needle. Our technologies help produce actionable insights that enable better decision-making and intelligence across the asset lifecycle of industrial projects, leading to improvements in safety, quality, efficiency, and productivity, which contribute to economic and environmental sustainability. Hexagon (Nasdaq Stockholm: HEXA B) has approximately 25,000 employees in 50 countries and net sales of approximately 5.4bn EUR. Learn more at hexagon.com and follow us @HexagonAB. Why work for Hexagon? At Hexagon, if you can see it, you can do it. Hexagon’s Asset Lifecycle Intelligence division puts its trust in you so that you can bring your ideas to life. We have emerged as one of the most engaged and enabled workplaces*. We are committed to creating an environment that is truly supportive by providing the resources you need to fully support your ambitions, no matter who you are or where you are in the world. * In the recently concluded workplace effectiveness survey by Korn Ferry, a global HR advisory firm, Hexagon’s Asset Lifecycle Intelligence division has emerged as one of the most Engaged and Enabled workplaces, when compared to similar organizations that Korn Ferry partners with. Everyone is welcome At Hexagon, we believe that diverse and inclusive teams are critical to the success of our people and our business. Everyone is welcome. As an inclusive workplace, we do not discriminate. In fact, we embrace differences and are fully committed to creating equal opportunities, an inclusive environment, and fairness for all. Respect is the cornerstone of how we operate, so speak up and be yourself. You are valued here.
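The posting asks for unit testing with assertion and mocking frameworks (JUnit in the Java stack). As a language-neutral illustration of the same idea, here is a minimal, hypothetical sketch using Python's built-in unittest and unittest.mock; the service and client names are invented for the example.

```python
import unittest
from unittest.mock import Mock


# Hypothetical service under test: fetches a record via an injected HTTP client.
class RecordService:
    def __init__(self, http_client):
        self.http_client = http_client

    def record_name(self, record_id):
        response = self.http_client.get(f"/records/{record_id}")
        return response["name"].strip().title()


class RecordServiceTest(unittest.TestCase):
    def test_record_name_is_normalized(self):
        # Mock the collaborator so the test exercises only RecordService logic.
        client = Mock()
        client.get.return_value = {"name": "  pump station 7 "}

        service = RecordService(client)

        self.assertEqual(service.record_name(42), "Pump Station 7")
        client.get.assert_called_once_with("/records/42")


if __name__ == "__main__":
    unittest.main()
```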
Posted 2 months ago