2 - 5 years
14 - 17 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them to feature development within the provided time frame.
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- More than 6 years of overall experience, including 4+ years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.

Preferred technical and professional experience:
- Hands-on experience with cloud technology (AWS/GCP/Azure).
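As a rough illustration of the Python-and-Spark work this posting describes, here is a minimal PySpark sketch; the input path, column names, and aggregation are assumptions for illustration, not details from the listing.

```python
# Minimal PySpark sketch: read, transform, aggregate, write.
# File path and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-pipeline").getOrCreate()

df = spark.read.csv("events.csv", header=True, inferSchema=True)

daily = (df
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date")
         .agg(F.count("*").alias("events")))

daily.write.mode("overwrite").parquet("daily_counts")
```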
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using PySpark. Your typical day will involve collaborating with cross-functional teams, developing and deploying PySpark applications, and acting as the primary point of contact for the project.

Roles & Responsibilities:
- Lead the effort to design, build, and configure PySpark applications, collaborating with cross-functional teams to ensure project success.
- Develop and deploy PySpark applications, ensuring adherence to best practices and standards.
- Act as the primary point of contact for the project, communicating effectively with stakeholders and providing regular updates on project progress.
- Provide technical guidance and mentorship to junior team members, ensuring their continued growth and development.
- Stay updated with the latest advancements in PySpark and related technologies.

Professional & Technical Skills:
- Must have: strong experience in PySpark.
- Good to have: experience with Hadoop, Hive, and other big data technologies.
- Solid understanding of software development principles and best practices.
- Experience with Agile development methodologies.
- Strong problem-solving and analytical skills.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices, with mandatory return to office (RTO) for 2-3 days and work across two shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).
Posted 3 months ago
10 - 14 years
12 - 16 Lacs
Pune
Work from Office
Client expectations beyond the JD: longer AWS data engineering experience (Glue, Spark, ECR/ECS, Docker), Python, PySpark, Hudi/Iceberg, Terraform, Kafka. Java early in the career would be a great addition but is not a priority (for the OOP fundamentals and Java connectors).
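For the Hudi part of this stack, here is a hedged PySpark sketch of an upsert into a Hudi table; the table name, key fields, and S3 path are assumptions, and the options shown are a common minimal set rather than a vetted production config.

```python
# Hedged sketch of writing an Apache Hudi table from PySpark; table name,
# record key, precombine field, and path are illustrative assumptions.
# Requires the Hudi Spark bundle on the classpath (e.g., via --packages).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-upsert").getOrCreate()
df = spark.createDataFrame(
    [(1, "a", "2024-01-01"), (2, "b", "2024-01-02")],
    ["id", "payload", "updated_at"],
)

hudi_options = {
    "hoodie.table.name": "events",
    "hoodie.datasource.write.recordkey.field": "id",
    "hoodie.datasource.write.precombine.field": "updated_at",
    "hoodie.datasource.write.operation": "upsert",
}
(df.write.format("hudi")
   .options(**hudi_options)
   .mode("append")
   .save("s3://my-bucket/hudi/events"))  # assumed path
```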
Posted 3 months ago
6 - 8 years
8 - 12 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them to feature development within the provided time frame.
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- More than 6 years of overall experience, including 4+ years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.

Preferred technical and professional experience:
- Hands-on experience with cloud technology (AWS/GCP/Azure).
Posted 3 months ago
4 - 9 years
5 - 15 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
About Client: Hiring for one of our multinational corporations!

Job Title: Data Engineer (Scala, Spark, Hadoop)
Location: Bangalore
Job Type: Full Time, Work From Office

Job Summary:
We are seeking a talented and motivated Data Engineer with strong expertise in Scala, Apache Spark, and Hadoop to join our growing team. As a Data Engineer, you will be responsible for building, optimizing, and maintaining scalable data pipelines, data processing systems, and data storage solutions. The ideal candidate will be passionate about working with big data technologies and developing innovative solutions for processing and analyzing large datasets.

Key Responsibilities:
- Design, develop, and implement robust data processing pipelines using Scala, Apache Spark, and Hadoop frameworks.
- Develop ETL processes to extract, transform, and load large volumes of structured and unstructured data into data lakes and data warehouses.
- Work with large datasets to optimize performance, scalability, and data quality.
- Collaborate with cross-functional teams, including Data Scientists, Analysts, and DevOps, to deliver end-to-end data solutions.
- Ensure data processing workflows are automated, monitored, and optimized for efficiency and cost-effectiveness.
- Troubleshoot and resolve data issues, ensuring data integrity and quality.
- Integrate various data sources into the Hadoop ecosystem and ensure effective data management.
- Develop and implement best practices for coding, testing, and deployment of data processing pipelines.
- Document and maintain clear and comprehensive technical documentation for data engineering processes and systems.
- Stay up-to-date with the latest industry trends, tools, and technologies in the data engineering and big data ecosystem.

Required Skills and Qualifications:
- Proven experience as a Data Engineer, Data Developer, or similar role working with Scala, Apache Spark, and Hadoop.
- Strong knowledge of big data processing frameworks such as Apache Spark, Hadoop, HDFS, and MapReduce.
- Experience with distributed computing and parallel processing techniques.
- Solid experience with ETL processes and working with relational and NoSQL databases (e.g., MySQL, MongoDB, Cassandra).
- Proficiency in SQL for querying large datasets.
- Strong experience with data storage technologies such as HDFS, Hive, HBase, or Parquet.
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data-related services (e.g., S3, Redshift, BigQuery).
- Experience with workflow orchestration tools such as Airflow, Oozie, or Luigi (a minimal Airflow sketch follows this posting).
- Knowledge of data warehousing, data lakes, and data integration patterns.
- Familiarity with version control tools such as Git and CI/CD pipelines.
- Strong problem-solving skills and the ability to debug complex issues.
- Excellent communication skills and the ability to collaborate with different teams.

Preferred Skills:
- Experience with streaming data technologies like Kafka, Flink, or Kinesis.
- Familiarity with data visualization tools (e.g., Tableau, Power BI) and reporting.
- Knowledge of machine learning models and experience working with Data Science teams.
- Experience working in an Agile/Scrum environment.
- Degree in Computer Science, Engineering, Mathematics, or a related field.

Why Join Us?
- Be a part of an innovative and dynamic team working on cutting-edge data engineering technologies.
- Opportunities for growth and career advancement in the data engineering domain.
- Competitive salary and benefits package.
- Flexible work arrangements and a supportive work environment.

Contact:
Srishty Srivastava
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, India
Direct Number: 8067432456
srishty.srivastava@blackwhite.in | www.blackwhite.in
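Since the posting lists Airflow among the orchestration tools, here is a minimal Airflow DAG sketch in Python; the DAG id, schedule, and shell commands are illustrative assumptions.

```python
# Minimal Airflow DAG sketch for the kind of pipeline orchestration mentioned;
# DAG id, schedule, and the shell commands are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    transform = BashOperator(task_id="transform", bash_command="echo transforming")
    load = BashOperator(task_id="load", bash_command="echo loading")

    # Run the steps strictly in order.
    extract >> transform >> load
```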
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Pune
Work from Office
Job Purpose: Effectively handle development and support in Scala/Python/Databricks technology.

Duties and Responsibilities:
- Understand business logic from the PMO team and convert it into technical specifications.
- Build data integration modules between different systems.
- Build data expertise and own data quality.
- Prepare processes and best practices for day-to-day activities and have the team implement them.

Key Decisions / Dimensions:
- Fast decision-making on production fixes is expected.

Major Challenges:
- Define and manage SLAs for all support activities.
- Define and meet milestones for all project deliveries.

Required Qualifications and Experience:
a) Qualifications: Minimum qualification required is graduation.
b) Work Experience: Relevant work experience of 2 to 5 years.
c) Skills Keywords:
- Hands-on experience with Scala/Python.
- Knowledge of joins, classes, functions, and ETL in Scala/Python.
- Knowledge of code optimization techniques in Databricks.
- Experience on other Azure platforms is an added advantage.
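On the "optimization of code in Databricks" point, one standard technique is broadcasting the small side of a join to avoid shuffling the large table; a minimal PySpark sketch with assumed table paths follows.

```python
# Illustrative PySpark sketch of a common Databricks optimization: broadcast
# the small side of a join so the large table is not shuffled. The table
# paths are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-join").getOrCreate()

orders = spark.read.parquet("orders")        # large fact table (assumed)
countries = spark.read.parquet("countries")  # small dimension table (assumed)

# broadcast() ships `countries` to every executor instead of shuffling `orders`.
joined = orders.join(broadcast(countries), on="country_code", how="left")
joined.explain()  # the plan should show a BroadcastHashJoin
```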
Posted 3 months ago
3 - 6 years
3 - 6 Lacs
Hyderabad
Work from Office
Hadoop Admin (1 position)
- Hadoop administration
- Automation (Ansible, shell scripting, or Python scripting)
- DevOps skills (should be able to code in at least one language, preferably Python)
Location: Preferably Bangalore; otherwise Chennai, Pune, or Hyderabad
Working Type: Remote
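As a rough illustration of the Python automation this role asks for, here is a hedged sketch that checks HDFS capacity by parsing `hdfs dfsadmin -report`; the threshold and the report-format handling are assumptions.

```python
# Hypothetical sketch of Hadoop-admin automation in Python: check cluster-wide
# HDFS usage via `hdfs dfsadmin -report`. Threshold and alerting are assumed.
import subprocess

def hdfs_report() -> str:
    """Run `hdfs dfsadmin -report` and return its stdout."""
    result = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def check_capacity(report: str, threshold_pct: float = 80.0) -> None:
    """Warn if the cluster-wide 'DFS Used%' exceeds the threshold."""
    for line in report.splitlines():
        if line.startswith("DFS Used%"):
            used = float(line.split(":")[1].strip().rstrip("%"))
            if used > threshold_pct:
                print(f"WARNING: HDFS usage {used:.1f}% exceeds {threshold_pct}%")
            return

if __name__ == "__main__":
    check_capacity(hdfs_report())
```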
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Bengaluru
Work from Office
Job Title: Spark Developer - Immediate Joiner

Responsibilities: A day in the life of an Infoscion: as part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Preferred Skills: Technology -> Big Data - Data Processing -> Spark -> Spark Streaming
Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCA
Service Line: Data & Analytics Unit
*Location of posting is subject to business requirements
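Since the posting centers on Spark Streaming, a minimal PySpark Structured Streaming word count follows; the socket source, host/port, and console sink are illustrative assumptions.

```python
# Minimal PySpark Structured Streaming word count; the source and sink
# (a local socket and the console) are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("streaming-word-count").getOrCreate()

lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

counts = (lines
          .select(explode(split(lines.value, " ")).alias("word"))
          .groupBy("word")
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```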
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Hyderabad
Work from Office
JR REQ: Big Data Engineer | 4 to 8 years | Hyderabad | Karuppiah Mg | TCS C2H | 900000
Posted 3 months ago
6 - 11 years
0 - 3 Lacs
Bengaluru
Work from Office
SUMMARY
This is a remote position.

Job Description: EMR Admin
We are seeking an experienced EMR Admin with expertise in big data services such as Hive, Metastore, HBase, and Hue. The ideal candidate should also possess knowledge of Terraform and Jenkins. Familiarity with Kerberos and Ansible tools would be an added advantage, although not mandatory. Candidates with Hadoop admin skills, proficiency in Terraform and Jenkins, and the ability to handle EMR Admin responsibilities are encouraged to apply.

Location: Remote
Experience: 6+ years
Must-Have: The candidate should have 4 years of EMR Admin experience.

Requirements:
- Proven experience in EMR administration
- Proficiency in big data services including Hive, Metastore, HBase, and Hue
- Knowledge of Terraform and Jenkins
- Familiarity with Kerberos and Ansible tools (preferred)
- Experience in Hadoop administration (preferred)
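As a rough illustration of routine EMR administration from Python, here is a hedged boto3 sketch that lists active clusters; the region is an assumption, and real admin work would of course go further.

```python
# Hedged boto3 sketch: list active EMR clusters and their current state.
# The AWS region is an assumption.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Clusters that are currently starting, running, or waiting.
resp = emr.list_clusters(ClusterStates=["STARTING", "RUNNING", "WAITING"])
for cluster in resp["Clusters"]:
    detail = emr.describe_cluster(ClusterId=cluster["Id"])["Cluster"]
    print(cluster["Id"], cluster["Name"], detail["Status"]["State"])
```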
Posted 3 months ago
3 - 8 years
5 - 9 Lacs
Chennai
Work from Office
Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must have skills: Google BigQuery
Good to have skills: PySpark, Big Data Analytics Architecture and Design, Scala Programming Language, ETL processes, and data warehousing
Minimum 3 year(s) of experience is required
Educational Qualification: B.Tech

Summary: As an Application Designer, you will be responsible for assisting in defining requirements and designing applications to meet business process and application requirements. Your typical day will involve working with Google BigQuery and utilizing your skills in the Scala programming language, PySpark, ETL processes, and data warehousing to design and develop efficient and effective data solutions.

Roles & Responsibilities:
- Design and develop efficient and effective data solutions using Google BigQuery and other relevant technologies.
- Assist in defining requirements and designing applications to meet business process and application requirements.
- Collaborate with cross-functional teams to ensure the successful implementation of data solutions.
- Develop and maintain technical documentation related to data solutions.
- Stay updated with the latest advancements in data engineering and big data technologies.

Professional & Technical Skills:
- Must have: proficiency in Google BigQuery.
- Good to have: Scala programming language, Big Data Analytics Architecture and Design, PySpark, ETL processes, and data warehousing.
- Experience in designing and developing efficient and effective data solutions.
- Strong understanding of data engineering and big data technologies.
- Experience in collaborating with cross-functional teams.
- Excellent communication and documentation skills.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google BigQuery.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Chennai office.
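As a small illustration of the core BigQuery skill, here is a hedged Python sketch using the official google-cloud-bigquery client; the project id is an assumption, and the query runs against a public sample dataset.

```python
# Minimal sketch of querying BigQuery from Python with the official client;
# the project id is an assumed placeholder.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumed project id

sql = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY word
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(sql).result():
    print(row.word, row.total)
```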
Posted 3 months ago
3 - 8 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Hadoop
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Apache Hadoop. Your typical day will involve working with the Hadoop ecosystem, developing and testing applications, and troubleshooting issues.

Roles & Responsibilities:
- Design, develop, and test applications using Apache Hadoop and related technologies.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Troubleshoot and debug issues in the Hadoop ecosystem, including HDFS, MapReduce, Hive, and Pig.
- Ensure the performance, scalability, and reliability of applications by optimizing code and configurations.

Professional & Technical Skills:
- Must have: experience with Apache Hadoop.
- Strong understanding of the Hadoop ecosystem, including HDFS, MapReduce, Hive, and Pig.
- Experience with Java or Scala programming languages.
- Familiarity with SQL and NoSQL databases.
- Experience with data ingestion, processing, and analysis using Hadoop tools like Sqoop, Flume, and Spark.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Hadoop.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Pune office.
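As a small illustration of the Hive-plus-Spark side of this Hadoop work, here is a hedged PySpark sketch reading a Hive table; the database and table names are assumptions.

```python
# Hedged sketch: query a Hive table through Spark with Hive support enabled.
# The database and table names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-read")
         .enableHiveSupport()  # lets Spark use the Hive metastore
         .getOrCreate())

df = spark.sql("SELECT * FROM sales_db.orders LIMIT 10")  # assumed table
df.show()
```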
Posted 3 months ago
2 - 4 years
5 - 9 Lacs
Kolkata
Work from Office
Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must have skills: Google BigQuery
Good to have skills: PySpark, Big Data Analytics Architecture and Design, Scala Programming Language, ETL processes, and data warehousing
Minimum 2 year(s) of experience is required
Educational Qualification: B.Tech

Summary: As an Application Designer, you will be responsible for assisting in defining requirements and designing applications to meet business process and application requirements. Your typical day will involve working with Google BigQuery and utilizing your skills in the Scala programming language, PySpark, ETL processes, and data warehousing to design and develop data-driven solutions.

Roles & Responsibilities:
- Design and develop applications using Google BigQuery to meet business process and application requirements.
- Collaborate with cross-functional teams to define requirements and ensure that applications are designed to meet business needs.
- Utilize your skills in the Scala programming language, PySpark, ETL processes, and data warehousing to design and develop data-driven solutions.
- Ensure that applications are designed to be scalable, reliable, and maintainable.
- Stay updated with the latest advancements in big data technologies.

Professional & Technical Skills:
- Must have: proficiency in Google BigQuery.
- Good to have: Scala programming language, Big Data Analytics Architecture and Design, PySpark, ETL processes, and data warehousing.
- Solid understanding of data engineering principles and best practices.
- Experience in designing and developing data-driven solutions.
- Experience in working with large datasets and designing scalable solutions.
- Strong problem-solving skills and ability to work in a fast-paced environment.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Google BigQuery.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Hyderabad office.
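Alongside querying, the same skill set covers loading data into BigQuery; here is a hedged sketch of a Cloud Storage load job, with the bucket, destination table, and schema handling as assumptions.

```python
# Hedged sketch: load CSV files from Cloud Storage into a BigQuery table.
# Bucket, destination table, and schema autodetection are assumptions.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the files
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/*.csv",   # assumed bucket/path
    "my-project.analytics.events",    # assumed destination table
    job_config=job_config,
)
load_job.result()  # wait for completion
print(client.get_table("my-project.analytics.events").num_rows, "rows loaded")
```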
Posted 3 months ago
3 - 7 years
5 - 9 Lacs
Mumbai
Work from Office
Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must have skills: Apache Kafka
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: Minimum 15 years of full-time education

Summary: As an Application Designer, you will be responsible for assisting in defining requirements and designing applications to meet business process and application requirements using Apache Kafka. Your typical day will involve working with cross-functional teams, analyzing requirements, and designing scalable and reliable applications.

Roles & Responsibilities:
- Collaborate with cross-functional teams to analyze business requirements and design scalable and reliable applications using Apache Kafka.
- Design and develop Kafka-based solutions for real-time data processing and streaming.
- Ensure the performance, scalability, and reliability of Kafka clusters and applications.
- Implement security and access control measures for Kafka clusters and applications.
- Stay updated with the latest advancements in Kafka and related technologies.

Professional & Technical Skills:
- Must have: strong experience in Apache Kafka.
- Good to have: experience with Apache Spark, Apache Flink, and other big data technologies.
- Experience in designing and developing Kafka-based solutions for real-time data processing and streaming.
- Strong understanding of Kafka architecture, configuration, and performance tuning.
- Experience in implementing security and access control measures for Kafka clusters and applications.
- Solid grasp of distributed systems and microservices architecture.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Kafka.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful Kafka-based solutions.
- This position is based at our Mumbai office.
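As a small illustration of the Kafka work described, here is a hedged producer sketch in Python using the kafka-python package; the broker address, topic, and payload are assumptions.

```python
# Hedged sketch: produce JSON events to Kafka with kafka-python.
# Broker address, topic, and payload are illustrative assumptions.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "status": "created"})
producer.flush()  # block until buffered records are delivered
```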
Posted 3 months ago
5 - 9 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Scala Programming Language
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: NA

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using the Scala programming language. Your typical day will involve collaborating with cross-functional teams, managing project timelines, and ensuring the successful delivery of high-quality software solutions.

Roles & Responsibilities:
- Lead the design, development, and deployment of software applications using Scala.
- Collaborate with cross-functional teams to identify and prioritize project requirements, ensuring timely delivery of high-quality software solutions.
- Manage project timelines and resources, ensuring successful project delivery within budget and scope.
- Provide technical leadership and mentorship to junior team members, promoting a culture of continuous learning and improvement.
- Stay up-to-date with emerging trends and technologies in software engineering.

Professional & Technical Skills:
- Must have: proficiency in the Scala programming language.
- Good to have: experience with Java, Python, or other programming languages.
- Strong understanding of software engineering principles and best practices.
- Experience with Agile development methodologies and tools such as JIRA or Confluence.
- Experience with cloud-based technologies such as AWS or Azure.
- Solid grasp of database technologies such as SQL or NoSQL.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Scala.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful software solutions.
- This position is based at our Hyderabad office.
Posted 3 months ago
7 - 11 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Hadoop Administration, Risk Assessment, URS Preparation
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: Graduation

Summary: As a Hadoop Administrator, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve working with Hadoop, managing and monitoring Hadoop clusters, and ensuring the security and scalability of the Hadoop infrastructure.

Roles & Responsibilities:
- Lead the design, build, and configuration of Hadoop applications to meet business process and application requirements.
- Manage and monitor Hadoop clusters, ensuring the security and scalability of the Hadoop infrastructure.
- Collaborate with cross-functional teams, applying expertise in Hadoop administration and related technologies.
- Communicate technical findings effectively to stakeholders, utilizing data visualization tools for clarity.
- Stay updated with the latest advancements in Hadoop administration and related technologies.

Professional & Technical Skills:
- Must have: experience in Hadoop administration.
- Good to have: experience in related technologies such as Hive, Pig, and HBase.
- Strong understanding of Hadoop architecture and related technologies.
- Experience in managing and monitoring Hadoop clusters.
- Experience in ensuring the security and scalability of the Hadoop infrastructure.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Hadoop administration.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bengaluru office.

Qualifications: Graduation
Posted 3 months ago
4 - 6 years
6 - 8 Lacs
Hyderabad
Work from Office
Python/Spark/Scala experience; AWS experience will be an added advantage.
- Professional hands-on experience in Scala/Python.
- Around 4 to 6 years of experience with excellent coding skills in the Java programming language.
- Knowledge of (or hands-on experience with) big data platforms and frameworks is good to have.
- Excellent code-comprehension skills: should be able to read open-source code (Trino) and build optimizations or improvements on top of it.
- Working experience in Presto/Trino is a great advantage.
- Knowledge of Elasticsearch and Grafana is good to have.
- Experience working under Agile methodology.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Intuitive individual with an ability to manage change and proven time management.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.

Preferred technical and professional experience:
- Around 4 to 6 years of experience with excellent coding skills in Java.
- Knowledge of (or hands-on experience with) big data platforms and frameworks.
- Ability to read open-source code (Trino) and build optimizations or improvements on top of it.
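Given the emphasis on Presto/Trino, here is a hedged sketch of connecting to Trino from Python with the `trino` client package (DB-API interface); host, catalog, and the queried table are assumptions.

```python
# Hedged sketch: query Trino from Python via the `trino` package's DB-API.
# Host, port, catalog, schema, and table are illustrative assumptions.
import trino

conn = trino.dbapi.connect(
    host="localhost", port=8080, user="analyst",
    catalog="hive", schema="default",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM orders")  # assumed table
print(cur.fetchone()[0])
```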
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Pune
Work from Office
Job Title: Solutions IT Developer - Kafka Specialist
Location: Toronto / Offshore - Pune

About The Role: We are seeking a seasoned Solutions IT Developer with a strong background in Apache Kafka to join the developer advocacy function in our event streaming team. The ideal candidate will be responsible for Kafka code reviews with clients, troubleshooting client connection issues with Kafka, and supporting client onboarding to Confluent Cloud. This role requires a mix of software development expertise and a deep understanding of Kafka architecture, components, and tuning.

Responsibilities:
1. Support for Line of Business (LOB) Users:
- Assist LOB users with onboarding to Apache Kafka (Confluent Cloud/Confluent Platform), ensuring a smooth integration process and understanding of the platform's capabilities.
2. Troubleshooting and Technical Support:
- Resolve connectivity issues, including client and library problems, to ensure seamless use of our Software Development Kit (SDK), accelerators, and Kafka client libraries.
- Address network connectivity and access issues.
- Provide deep support for the Kafka library, offering advanced troubleshooting and guidance.
- Java 11/17 and Spring Boot (Spring Kafka, Spring Cloud Stream, Spring Cloud Stream Kafka) experience.
3. Code Reviews and Standards Compliance:
- Perform thorough code reviews to validate client code against our established coding standards and best practices.
- Support the development of Async specifications tailored to client use cases, promoting effective and efficient data handling.
4. Developer Advocacy:
- Act as a developer advocate for all Kafka development at TD, fostering a supportive community and promoting best practices among developers.
5. Automation and APIs:
- Manage and run automation pipelines for clients using REST APIs as we build out the GitHub Actions flow.
6. Documentation and Knowledge Sharing:
- Update and maintain documentation standards, including troubleshooting guides, to ensure clear and accessible information is available.
- Create and disseminate knowledge materials, such as how-tos and FAQs, to answer common client questions related to Kafka development.

Role Requirements:
Qualifications:
- Bachelor's degree in Computer Science.
- Proven work experience as a Solutions Developer or similar role with a focus on Kafka design and development.

Skills:
- In-depth knowledge of Java 11/17 and Spring Boot (Spring Kafka, Spring Cloud Stream, Spring Cloud Stream Kafka).
- Deep knowledge of Apache Kafka, including Kafka Streams and Kafka Connect experience.
- Strong development skills in one or more high-level programming languages (Java, Python).
- Familiarity with Kafka API development and integration.
- Understanding of distributed systems principles and data streaming concepts.
- Experience with source control tools such as Git, and CI/CD pipelines.
- Excellent problem-solving and critical-thinking skills.

Preferred:
- Kafka certification (e.g., Confluent Certified Developer for Apache Kafka).
- Experience with streaming data platforms and ETL processes.
- Prior work with NoSQL databases and data warehousing solutions.

Experience:
- Minimum of 4 years of hands-on experience with Apache Kafka.
- Experience with large-scale data processing and event-driven system design.

Other Requirements:
- Good communication skills, both written and verbal.
- Ability to work independently as well as collaboratively.
- Strong analytical skills and attention to detail.
- Willingness to keep abreast of industry developments and new technologies.
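To complement the role's client-troubleshooting focus, here is a hedged consumer-side sketch in Python using the kafka-python package (the posting's own stack is Java/Spring Kafka, so this is an illustrative stand-in); broker, group, and topic are assumptions.

```python
# Hedged Kafka consumer sketch (kafka-python). A failing connection typically
# surfaces as NoBrokersAvailable, a common first symptom when diagnosing
# client connectivity. Broker, group, and topic are assumptions.
from kafka import KafkaConsumer
from kafka.errors import NoBrokersAvailable

try:
    consumer = KafkaConsumer(
        "orders",                            # assumed topic
        bootstrap_servers="localhost:9092",  # assumed broker
        group_id="diagnostics",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,            # stop iterating if idle for 5 s
    )
    for record in consumer:
        print(record.topic, record.partition, record.offset, record.value)
except NoBrokersAvailable:
    print("Could not reach any broker: check address, DNS, and firewall rules")
```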
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Noida
Work from Office
About The Role:
Role Purpose: The purpose of this role is to interpret data and turn it into information (reports, dashboards, interactive visualizations, etc.) which can offer ways to improve a business, thus affecting business decisions.

Do:
1. Manage the technical scope of the project in line with the requirements at all stages:
a. Gather information from various sources (data warehouses, databases, data integration and modelling) and interpret patterns and trends.
b. Develop record management processes and policies.
c. Build and maintain relationships at all levels within the client base and understand their requirements.
d. Provide sales data, proposals, data insights, and account reviews to the client base.
e. Identify areas to increase efficiency and automation of processes.
f. Set up and maintain automated data processes.
g. Identify, evaluate, and implement external services and tools to support data validation and cleansing.
h. Produce and track key performance indicators.

2. Analyze the data sets and provide adequate information:
a. Liaise with internal and external clients to fully understand data content.
b. Design and carry out surveys and analyze survey data as per the customer requirement.
c. Analyze and interpret complex data sets relating to the customer's business and prepare reports for internal and external audiences using business analytics reporting tools.
d. Create data dashboards, graphs, and visualizations to showcase business performance and provide sector and competitor benchmarking.
e. Mine and analyze large datasets, draw valid inferences, and present them successfully to management using a reporting tool.
f. Develop predictive models and share insights with the clients as per their requirements.

Deliver:
No. | Performance Parameter | Measure
1. | Analyses data sets and provides relevant information to the client | No. of automations done, on-time delivery, CSAT score, zero customer escalations, data accuracy
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
GCP: Data Engineer

- Hands-on, deep experience working with Google data products (e.g., BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, etc.).
- Hands-on experience in SQL and Unix scripting.
- Experience in Python and Kafka.
- ELT tool experience and hands-on DBT.
- Google Cloud Professional Data Engineers are responsible for developing Extract, Transform, and Load (ETL) processes to move data from various sources into the Google Cloud Platform.

Detailed JD:

Must Have:
- Around 8 to 11 years of experience with strong knowledge of migrating on-premise ETLs to Google Cloud Platform (GCP).
- 2-3 years of strong BigQuery and GCP experience.
- Very strong SQL writing skills.
- Hands-on experience in Python programming.
- Hands-on experience in the design, development, and implementation of data warehousing in the ETL process.
- Experience in IT data analytics projects, with hands-on experience migrating on-premise ETLs to GCP using cloud-native tools such as BigQuery, Google Cloud Storage, Composer, Dataflow, and Cloud Functions.
- GCP certified Associate Cloud Engineer.
- Practical understanding of data modelling (dimensional and relational), performance tuning, and debugging.
- Extensive experience in data warehousing using data extraction, data transformation, data loading, and business intelligence technologies using ELT design.
- Working experience in CI/CD using GitLab and Jenkins.

Good to Have:
- DBT tool experience.
- Practical experience in big data application development involving various data processing techniques for data ingestion, data modelling, in-stream data processing, and batch analytics using various distributions of Hadoop and its ecosystem tools like HDFS, Hive, Pig, Sqoop, and Spark.
- Documenting all implemented work using Confluence and tracking all requests and changes using Jira.
- Involvement in both technical and managerial activities, and experience in GCP.

Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build and maintain the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP data warehousing technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through data centers and GCP regions.
- Work with data and analytics experts to strive for greater functionality in the data systems.

Qualifications:
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience building and optimizing data warehousing data pipelines (ELT and ETL), architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable big data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Technical Skillset:
- Experience with data warehouse tools: GCP BigQuery, GCP Big Data, Dataflow, Teradata, etc.
- Experience with relational SQL and NoSQL databases, including PostgreSQL and MongoDB.
- Experience with data pipeline and workflow management tools: Data Build Tool (DBT), Airflow, Google Cloud Composer, Google Cloud Pub/Sub, etc.
- Experience with GCP cloud services: primarily BigQuery, Kubernetes, Cloud Functions, Cloud Composer, Pub/Sub, etc.
- Experience with object-oriented/object-function scripting languages: Python, Java, Terraform, etc.
- Experience with CI/CD pipeline and workflow management tools: GitHub Enterprise, Cloud Build, Codefresh, etc.
- Experience with data analytics and visualization tools: Tableau BI (on-prem and SaaS), Data Analytics Workbench (DAW), Visual Data Studio, etc.
- GCP Data Engineer certification is mandatory.
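Given the Dataflow requirement, here is a hedged Apache Beam (Python SDK) pipeline sketch; the file paths are assumptions, and the DirectRunner shown for local use would be swapped for DataflowRunner on GCP.

```python
# Hedged sketch of a Dataflow-style ETL pipeline with Apache Beam's Python SDK.
# File paths are assumptions; DirectRunner runs locally, DataflowRunner on GCP.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("input.csv", skip_header_lines=1)
     | "Parse" >> beam.Map(lambda line: line.split(","))
     | "FilterValid" >> beam.Filter(lambda row: len(row) == 3)
     | "Format" >> beam.Map(lambda row: ",".join(row))
     | "Write" >> beam.io.WriteToText("output", file_name_suffix=".csv"))
```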
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Responsibilities:
1. Understand the NBA requirements.
2. Provide subject matter expertise in relation to Pega CDH from a technology perspective.
3. Participate actively in the creation and review of the conceptual design, detailed design, and estimations.
4. Implement the NBAs as per the agreed requirement/solution.
5. Support end-to-end testing and provide fixes with a quick turnaround time.
6. Apply deployment knowledge to manage the implementation activities.
7. Experience in Pega CDH v8.8 multi-app or 24.1 and the retail banking domain is preferred.
8. Good communication skills.

Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software.
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
- Analyze information to recommend and plan the installation of new systems or modifications of an existing system.
- Ensure that code is error-free, with no bugs or test failures.
- Prepare reports on programming project specifications, activities, and status.
- Ensure all the codes are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns.
- Compile timely, comprehensive, and accurate documentation and reports as requested.
- Coordinate with the team on daily project status and progress, and document it.
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders.

Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work.
- Take feedback on a regular basis to ensure smooth and on-time delivery.
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members.
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Document necessary details and reports formally for a proper understanding of the software, from client proposal to implementation.
- Ensure good quality of interaction with the customer with respect to e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally.

Deliver:
No. | Performance Parameter | Measure
1. | Continuous integration, deployment, and monitoring of software | 100% error-free onboarding and implementation, throughput %, adherence to the schedule/release plan
2. | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3. | MIS & Reporting | 100% on-time MIS and report generation

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 3 months ago
3 - 7 years
5 - 9 Lacs
Bengaluru
Work from Office
Skill: Application Consultant EAI

Total years of experience: 12 years
Relevant years of experience: 5-10 years

Mandatory Skills:
- Cloud deployment/administration skills for Tibco BWCE applications in at least one of GCP, AWS, or RH OCP based environments.

Good to have:
1. Clear understanding of the similarities and differences between VM (virtual machine) and containerized Bot applications.
2. Exposure to messaging systems like Tibco EMS, including its management, administration, and interfacing with BWCE.

Detailed About The Role:
1. Good understanding of cloud-based (preferably Red Hat's OCP), containerized applications and architecture.
2. CI/CD mechanism for BWCE using Docker/Kubernetes; good grasp of image creation, configuration, and management using a root image, etc.
3. Good grasp of Docker repositories/registries for storing images.
4. Platform CLI tools (like OCP's oc.exe) for CI/CD automation, route creation, exposing ports, etc.
5. Good at distinguishing on-prem and cloud-platform technical administration and management requirements for Tibco BW6 and BWCE respectively.
6. Well versed in run-time aspects like load balancing and clustering, and their implementation using an external LB and/or messaging.
7. Must know the performance tuning mechanisms/parameters for horizontal/vertical scalability of BWCE containers.
8. Knowledge of exposing internal, containerized Bot applications to external clients.

Experience: 9+ years
Job location: Bangalore
Posted 3 months ago
8 - 13 years
25 - 30 Lacs
Chennai, Pune, Delhi
Work from Office
- Strong expertise in planning and developing hands-on customer-facing applications.
- Proactively work to develop the skills and behaviors of each team member, coaching and mentoring them toward success in their tasks and careers.
- Collaborate with management, stakeholders, and the team to evaluate, plan, and deliver projects involving advertising software systems, components, and features.
- Promote transparency regarding the team's activities across the organization through documentation, structured processes, and effective communications.
- Facilitate the team's learning, growth, and maturity in their agile practices.
- Lead by example, writing production-ready front-end and back-end code with a continuous-delivery deployment pipeline on AWS.
- Support employees by having regular 1:1 conversations, providing feedback, planning for and conducting quarterly reviews, scheduling time off, facilitating training, providing recognition, and managing any personal or performance issues.
- Create and maintain various ingestion pipelines for the GroundTruth platform.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, GIS, and AWS big data technologies.
- Contribute ideas to improve the location platform.

You are: This is our ideal wish list, but most people don't check every box on every job description. If you meet most of the criteria below and you're excited about the opportunity and willing to learn, we'd love to hear from you.
- Detail-oriented: the little things matter.
- Adaptable and able to pivot to meet demands and carry out expectations.
- Organized, with a demonstrated ability to prioritize and deliver work in a timely manner.
- Able to work under strict deadlines and prioritize a heavy workload.
- A team player, not afraid to roll up your sleeves and help out when/where needed.
- Self-sufficient, not afraid to take the lead and manage tasks independently.
- Coachable and open to feedback.
- Respectful: we treat each other with respect and assume the best of one another.

You have:
- B.Tech./B.E./M.Tech./MCA or equivalent in computer science, with 8+ years of experience in technology and 2+ years of experience in management.
- Experience with GIS and POI/location data ingestion pipelines.
- Experience with the AWS stack used for data engineering: EC2, S3, EMR, ECS, Lambda, Step Functions, etc.
- Hands-on experience with Python/Java for orchestration of data pipelines.
- Experience writing analytical queries using SQL.
- Experience in Airflow.
- Experience in Docker.
- Proficiency in Git.

How you can impress us:
- Knowledge of REST APIs, FastAPI, and front end.
- Any experience with big data technologies like Hadoop, MapReduce, Pig, or PySpark is a plus.
- Knowledge of shell scripting.
- Experience with BI tools like Looker and Tableau.
- Experience with DB design and maintenance.
- Experience with Amazon Web Services and Docker.
- Configuration management and QA practices.
Posted 3 months ago
3 - 5 years
7 - 11 Lacs
Pune
Work from Office
About the job:
The Red Hat Global Support Services team is looking for a highly skilled Technical Support Engineer with excellent technical and customer service skills to join us in Pune, India. In this role, you'll become an expert on Red Hat technologies, including Red Hat Enterprise Linux (RHEL) and our storage and cloud solutions, and you will provide technical support for our enterprise customers both online and over the phone. As a Technical Support Engineer, you will be responsible for performing basic investigation and troubleshooting to resolve difficult customer issues quickly and effectively. The customers you'll interact with are often other engineers, so we'll need you to develop your technical skills by becoming a Red Hat Certified Engineer (RHCE) and getting up to speed with our storage system.

What will you do:
- Provide technical support to Red Hat enterprise customers.
- Work with Red Hat enterprise customers across the globe on a 24x7 basis, which requires working in different shifts periodically.
- Diagnose problems, troubleshoot customer issues, and develop solutions to technical issues.
- Exceed customer expectations by providing outstanding customer service.
- Consult and develop relationships with in-house engineers and developers to promote creative solutions and improve customer satisfaction.
- Contribute to the global Red Hat knowledge management system while working on customer issues.

What will you bring:
- 3+ years of Linux experience in the enterprise sector, with excellent administration skills.
- Thorough knowledge of the Linux kernel, networking, and memory management.
- Excellent troubleshooting skills and a passion for problem solving and investigation.
- Ability to work with conflicting priorities, take initiative, and maintain a customer-centric focus.
Posted 3 months ago