
344 HDFS Jobs - Page 5

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 6.0 years

3 - 6 Lacs

Bengaluru, Karnataka, India

On-site

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the necessary health of the overall solution.

Your Impact: Data ingestion, integration, and transformation; data storage and computation frameworks; performance optimization; analytics and visualization; infrastructure and cloud computing; data management platforms. Build functionality for data ingestion from multiple heterogeneous sources in batch and real time, and for data analytics, search, and aggregation.

Qualifications - Your Skills & Experience: Minimum 3 years of experience in Big Data technologies. Hands-on experience with the Hadoop stack (HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow) and other components required to build end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage. Strong experience in at least one of Java, Scala, or Python, with Java preferred. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery. Well versed in data platform-related services on AWS. Bachelor's degree and 6 to 8 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position.

Set Yourself Apart With: Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres). Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation. Knowledge of distributed messaging frameworks such as ActiveMQ, RabbitMQ, and Solace, of search and indexing, and of microservices architectures. Performance tuning and optimization of data pipelines. Cloud data specialty and other related Big Data technology certifications.

A Tip from the Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make an impact on our clients and their businesses directly through your work.

Additional Information: Gender-neutral policy; 18 paid holidays throughout the year; generous parental leave and new-parent transition program; flexible work arrangements; Employee Assistance Programs to support your wellness and well-being.

Company Description: Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.
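
To ground the batch-ingestion requirement above, here is a minimal, hedged PySpark sketch of one common pattern: pulling a table from a relational source over JDBC and landing it on HDFS as Parquet. The connection URL, credentials, table, and paths are invented placeholders, not details from the posting, and the appropriate JDBC driver is assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession

# Batch ingestion sketch: relational source -> HDFS, as Parquet.
spark = SparkSession.builder.appName("batch-ingestion-example").getOrCreate()

# Hypothetical JDBC source; swap in real host, credentials, and table.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db.example.com:3306/sales")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "etl_password")
    .load()
)

# Land the raw data on HDFS, partitioned by date for downstream pruning.
(
    orders.write.mode("append")
    .partitionBy("order_date")
    .parquet("hdfs:///data/raw/sales/orders")
)

spark.stop()
```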

Posted 1 month ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Bengaluru, Karnataka, India

On-site

Minimum 2 years of experience in Big Data technologies. Hands-on experience with the Hadoop stack (HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow) and other components required to build end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage. Strong experience in at least one of Java, Scala, or Python, with Java preferred. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery. Well versed in data platform-related services on AWS. Bachelor's degree and 4 to 6 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position.

Set Yourself Apart With: Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres). Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation. Knowledge of distributed messaging frameworks such as ActiveMQ, RabbitMQ, and Solace, of search and indexing, and of microservices architectures. Performance tuning and optimization of data pipelines. Cloud data specialty and other related Big Data technology certifications.

A Tip from the Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make an impact on our clients and their businesses directly through your work.

Additional Information: Gender-neutral policy; 18 paid holidays throughout the year; generous parental leave and new-parent transition program; flexible work arrangements; Employee Assistance Programs to support your wellness and well-being.

Company Description: Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.

Posted 1 month ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Bengaluru, Karnataka, India

On-site

As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will apply a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and independently drive design discussions to ensure the necessary health of the overall solution.

Your Impact: Data ingestion, integration, and transformation; data storage and computation frameworks; performance optimization; analytics and visualization; infrastructure and cloud computing; data management platforms. Build functionality for data ingestion from multiple heterogeneous sources in batch and real time, and for data analytics, search, and aggregation.

Qualifications - Your Skills & Experience: Minimum 3 years of experience in Big Data technologies. Hands-on experience with the Hadoop stack (HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow) and other components required to build end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage. Strong experience in at least one of Java, Scala, or Python, with Java preferred. Hands-on working knowledge of NoSQL and MPP data platforms such as HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, and GCP BigQuery. Well versed in data platform-related services on Azure. Bachelor's degree and 6 to 8 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position.

Set Yourself Apart With: Good hands-on knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres). Knowledge of data governance processes (security, lineage, catalog) and tools such as Collibra and Alation. Knowledge of distributed messaging frameworks such as ActiveMQ, RabbitMQ, and Solace, of search and indexing, and of microservices architectures. Performance tuning and optimization of data pipelines. Cloud data specialty and other related Big Data technology certifications.

A Tip from the Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make an impact on our clients and their businesses directly through your work.

Additional Information: Gender-neutral policy; 18 paid holidays throughout the year; generous parental leave and new-parent transition program; flexible work arrangements; Employee Assistance Programs to support your wellness and well-being.

Company Description: Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Data Software Engineer, you will draw on your 5-12 years of experience in Big Data and data-related technologies to contribute to the success of projects in Chennai and Coimbatore in a hybrid work mode. You should possess an expert-level understanding of distributed computing principles and strong knowledge of Apache Spark, with hands-on programming skills in Python. Your role will involve working with technologies such as Hadoop v2, MapReduce, HDFS, Sqoop, Apache Storm, and Spark Streaming to build stream-processing systems. You should have a good grasp of Big Data querying tools like Hive and Impala, as well as experience in integrating data from various sources including RDBMS, ERP, and files. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB, and knowledge of ETL techniques and frameworks, will be essential for this role. You will be tasked with performance tuning of Spark jobs (a short tuning sketch follows this listing), working with Azure Databricks, and leading a team efficiently. Additionally, your expertise in designing and implementing Big Data solutions, along with a strong understanding of SQL queries, joins, stored procedures, and relational schemas, will be crucial. As a practitioner of Agile methodology, you will play a key role in the successful delivery of data-driven projects.
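
Performance tuning of Spark jobs usually starts with a few standard levers. The hedged sketch below shows three of them: right-sizing shuffle partitions, caching a DataFrame that several aggregations reuse, and coalescing output files. The paths and numbers are illustrative assumptions only, not values from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("spark-tuning-example")
    # Size shuffle parallelism to the data volume instead of the default 200.
    .config("spark.sql.shuffle.partitions", "64")
    # Let adaptive query execution coalesce skewed shuffle partitions.
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

events = spark.read.parquet("hdfs:///data/events")  # hypothetical input path

# Cache an intermediate result reused by two downstream aggregations.
daily = events.groupBy("event_date", "event_type").agg(F.count("*").alias("cnt"))
daily.cache()

top_types = daily.orderBy(F.desc("cnt")).limit(10)
totals = daily.groupBy("event_date").agg(F.sum("cnt").alias("total"))

# Coalesce before writing so the job emits a few large files, not thousands.
totals.coalesce(8).write.mode("overwrite").parquet("hdfs:///data/out/daily_totals")
top_types.show()

spark.stop()
```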

Posted 1 month ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

At Iron Mountain, we believe that work, when done well, can have a positive impact on our customers, employees, and the planet. That's why we are looking for smart and committed individuals to join our team. Whether you are starting your career or seeking a change, we invite you to explore how you can enhance the impact of your work at Iron Mountain. We offer expert and sustainable solutions in records and information management, digital transformation services, data centers, asset lifecycle management, and fine art storage, handling, and logistics. Collaborating with over 225,000 customers worldwide, we aim to preserve valuable artifacts, optimize inventory, and safeguard data privacy through innovative and socially responsible practices. If you are interested in being part of our growth journey and expanding your skills in a culture that values diverse contributions, let's have a conversation. As Iron Mountain progresses with its digital transformation, we are expanding our Enterprise Data Platform Team, which plays a crucial role in supporting data integration solutions, reporting, and analytics. The team focuses on maintaining and enhancing data platform components essential for delivering our data solutions. As a Data Platform Engineer at Iron Mountain, you will leverage your advanced knowledge of cloud big data technologies, software development expertise, and strong SQL skills. The ideal candidate will have a background in software development and big data engineering, with experience working in a remote environment and supporting both on-shore and off-shore engineering teams. Key Responsibilities: - Building and operationalizing cloud-based platform components - Developing production-quality ingestion pipelines with automated quality checks to centralize access to all data sets - Assessing current system architecture and recommending solutions for improvement - Building automation using Python modules to support product development and data analytics initiatives - Ensuring maximum uptime of the platform by utilizing cloud technologies such as Kubernetes, Terraform, Docker, etc. - Resolving technical issues promptly and providing guidance to development teams - Researching current and emerging technologies and proposing necessary changes - Assessing the business impact of technical decisions and participating in collaborative environments to foster new ideas - Maintaining comprehensive documentation on processes and decision-making Your Qualifications: - Experience with DevOps/Automation tools to minimize operational overhead - Ability to contribute to self-organizing teams within the Agile/Scrum project methodology - Bachelor's Degree in Computer Science or related field - 3+ years of related IT experience - 1+ years of experience building complex ETL pipelines with dependency management - 2+ years of experience in Big Data technologies such as Spark, Hive, Hadoop, etc. 
- Industry-recognized certifications - Strong familiarity with PaaS services, containers, and orchestrations - Excellent verbal and written communication skills What's in it for you? - Be part of a global organization focused on transformation and innovation - A supportive environment where you can voice your opinions and be your authentic self - Global connectivity to learn from teammates across 52 countries - Embrace diversity, inclusion, and differences within a winning team - Competitive Total Reward offerings to support your career, family, wellness, and retirement. Iron Mountain is a global leader in storage and information management services, trusted by organizations worldwide. We safeguard critical business information, sensitive data, and cultural artifacts. Our services help lower costs, mitigate risks, comply with regulations, and enable digital solutions. If you require accommodations due to a disability, please reach out to us. Category: Information Technology
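
As a hedged illustration of the "production-quality ingestion pipelines with automated quality checks" responsibility listed above, here is one way such checks are often written in Python with PySpark. The dataset, columns, and invariants are assumptions made up for the example, not part of the posting.

```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-quality-checks").getOrCreate()


def run_quality_checks(df: DataFrame, key_col: str, required_cols: list) -> None:
    """Fail the pipeline early if basic data-quality invariants are violated."""
    total = df.count()
    if total == 0:
        raise ValueError("Quality check failed: ingested dataset is empty")

    # Required columns must not contain nulls.
    for col in required_cols:
        nulls = df.filter(F.col(col).isNull()).count()
        if nulls > 0:
            raise ValueError(f"Quality check failed: {nulls} nulls in '{col}'")

    # The business key must be unique.
    if df.select(key_col).distinct().count() != total:
        raise ValueError(f"Quality check failed: duplicates in '{key_col}'")


# Hypothetical batch; a real pipeline would read from cloud object storage.
batch = spark.read.parquet("s3a://example-bucket/raw/customers/")
run_quality_checks(batch, key_col="customer_id", required_cols=["customer_id", "email"])
batch.write.mode("append").parquet("s3a://example-bucket/curated/customers/")
```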

Posted 1 month ago

Apply

6.0 - 10.0 years

0 - 0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Big Data Engineer at KGIS, you will be an integral part of the team dedicated to building cutting-edge digital and analytics solutions for global enterprises. With a focus on designing, developing, and optimizing large-scale data processing systems, you will lead the way in creating scalable data pipelines, driving performance tuning, and spearheading cloud-native big data initiatives.

Your responsibilities will include designing and developing robust Big Data solutions using Apache Spark, building both batch and real-time data pipelines utilizing technologies like Spark, Spark Streaming, Kafka, and RabbitMQ, implementing ETL processes for data ingestion and transformation, and optimizing Spark jobs for enhanced performance and scalability. You will also work with NoSQL technologies such as HBase, Cassandra, or MongoDB, query large datasets using tools like Hive and Impala, ensure seamless integration of data from various sources, and lead a team of data engineers while following Agile methodologies.

To excel in this role, you must possess deep expertise in Apache Spark and distributed computing, strong programming skills in Python, solid experience with Hadoop v2, MapReduce, HDFS, and Sqoop, proficiency in real-time stream processing using Apache Storm or Spark Streaming, and familiarity with messaging systems like Kafka or RabbitMQ. Additionally, you should have SQL mastery, hands-on experience with NoSQL databases, knowledge of cloud-native services in AWS or Azure, a strong understanding of ETL tools and performance tuning, an Agile mindset, and excellent problem-solving skills. While not mandatory, exposure to data lake and lakehouse architectures, familiarity with DevOps tools for CI/CD and data pipeline monitoring, and certifications in cloud or big data technologies are considered advantageous.

Joining KGIS will provide you with the opportunity to work on innovative projects with Fortune 500 clients, be part of a fast-paced and meritocratic culture that values ownership, gain access to cutting-edge tools and technologies, and thrive in a collaborative and growth-focused environment. If you are ready to elevate your Big Data career and contribute to our digital transformation journey, apply now and embark on this exciting opportunity at KGIS.

Posted 1 month ago

Apply

2.0 - 10.0 years

0 Lacs

Karnataka

On-site

The ideal candidate for the position of AI/ML Architect should possess a B.Sc., B.E./B.Tech., M.E., M.S., or M.Tech. degree with a strong academic record. With at least 10 years of overall experience, including 4-5 years in an Architect role, the candidate should also have 4+ years of hands-on experience applying statistical/ML algorithms and techniques to real-world data sets. Additionally, 4+ years of experience as a developer working on scalable and secure applications is required, along with 2+ years of experience independently designing core product modules or complex components. Proficiency in GenAI is a must-have for this role.

Key Responsibilities: The responsibilities of this role include designing and overseeing the architecture of ML systems, encompassing data pipelines, model selection, deployment strategies, and ensuring scalability and performance. The candidate should have a strong understanding of AI algorithms, data engineering, and software architecture. Collaboration with cross-functional teams to translate business needs into effective ML solutions is essential. Defining the architecture of ML systems, from data ingestion to deployment pipelines, and selecting appropriate ML algorithms based on data characteristics and business objectives are crucial aspects of the role. Designing and implementing data pipelines for data collection, cleaning, transformation, and feature engineering is also a key responsibility (a small illustrative sketch follows this listing). Ensuring that ML systems can handle large volumes of data and deliver efficient predictions is vital. The candidate should have in-depth knowledge of multiple technological and architectural areas, applicable processes, methodologies, standards, products, and frameworks. Defining and documenting architecture, capturing non-functional requirements, preparing estimates, and defining technical solutions to proposals are part of the role. Providing technical leadership, mentoring a team of developers, and collaborating with multiple teams to make technical decisions are essential aspects. Practices such as reuse, defect prevention, process optimization, automation, and productivity enhancement should be adopted.

Skills: The candidate should have expert knowledge of languages such as Python, proficiency in Probability, Statistics, and Linear Algebra, and proficiency in Machine Learning, Deep Learning, and NLP (GenAI is an added advantage). Familiarity with data science platforms, tools, and frameworks is required. Designing scalable processes for collecting, manipulating, presenting, and analyzing large datasets in a production-ready environment is essential. Experience with large volumes of data, algorithms, and prototyping is necessary. Knowledge of NiFi, Kafka, HDFS, Big Data, MongoDB, Cassandra, and CI/CD pipelines with tools like Git, Jenkins, Maven, Ansible, and Docker is preferred. Expertise in, or appreciation of, security products such as endpoint detection, protection and response, and managed detection and response is also desirable for this role.
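
Since the role centers on data pipelines for cleaning, transformation, and feature engineering, here is a small, hedged scikit-learn sketch of that idea. The columns, toy data, and model choice are invented for illustration and are not from the posting.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical training frame; a real system would pull from a feature store.
df = pd.DataFrame({
    "age": [34, 41, None, 29],
    "plan": ["basic", "pro", "pro", "basic"],
    "monthly_spend": [20.0, 99.0, 110.0, None],
    "churned": [0, 1, 0, 0],
})

numeric = ["age", "monthly_spend"]
categorical = ["plan"]

# Cleaning + transformation + feature engineering as one reproducible pipeline.
features = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(df[numeric + categorical], df["churned"])
print(model.predict(df[numeric + categorical]))
```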

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Technical Support Engineer specializing in Big Data applications and systems within the Hadoop ecosystem, you will play a crucial role in ensuring the smooth operation, performance, and security of our data infrastructure. Your responsibilities will include providing technical support and troubleshooting for Big Data applications, monitoring system performance, collaborating with engineering teams to deploy and configure Hadoop clusters, and assisting in the maintenance and upgrades of Hadoop environments. You will also be responsible for developing and maintaining documentation, implementing data backup and recovery procedures, participating in on-call rotations, and staying up to date with Hadoop technologies and support methodologies. Furthermore, you will contribute to the training and onboarding of new team members and users on Hadoop best practices.

To excel in this role, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, and have at least 3 years of experience in Big Data support or system administration, particularly with the Hadoop ecosystem. You should possess a strong understanding of Hadoop components such as HDFS, MapReduce, Hive, and Pig, as well as experience with system monitoring and diagnostics tools. Proficiency in Linux/Unix commands and scripting languages like Bash and Python is essential, along with a basic understanding of database technologies and data warehousing concepts. Strong problem-solving skills and excellent communication and interpersonal abilities are also key requirements for this position. Additionally, you should be willing to work independently and collaboratively, learn new technologies, and enhance your skills continuously.

In this role, you will have the opportunity to work with cutting-edge technologies, solve complex challenges, and grow your career in a dynamic and collaborative work environment. We offer a competitive salary and benefits package, plus regular training and professional development opportunities to support your career advancement and personal growth. If you are passionate about Big Data, Hadoop technologies, and supporting mission-critical systems, we invite you to join our team and make a meaningful impact in the field of data infrastructure and analytics.
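
As one hedged example of the HDFS monitoring such a support role involves, the Python sketch below shells out to standard Hadoop CLI commands (`hdfs dfsadmin -report` and `hdfs fsck /`) and raises simple alerts. The 85% capacity threshold is an illustrative assumption, not a Hadoop default.

```python
import subprocess


def run(cmd):
    """Run a Hadoop CLI command and return its standard output."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


# Cluster-wide capacity and DataNode status.
report = run(["hdfs", "dfsadmin", "-report"])

# Filesystem integrity: fsck prints a summary marked HEALTHY or CORRUPT.
fsck = run(["hdfs", "fsck", "/"])
if "CORRUPT" in fsck:
    print("ALERT: HDFS reports corrupt blocks - investigate immediately")

# Crude capacity alert over the report's "DFS Used%" lines.
for line in report.splitlines():
    if line.startswith("DFS Used%"):
        used_pct = float(line.split(":")[1].strip().rstrip("%"))
        if used_pct > 85.0:
            print(f"ALERT: HDFS capacity at {used_pct:.1f}%")
```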

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Your focus will be on building scalable and efficient data pipelines for handling large datasets and enabling batch and real-time data streaming and processing.

Your responsibilities will include developing Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis. You will also develop and maintain Kafka-based data pipelines, which involves designing Kafka Streams, setting up Kafka clusters, and ensuring efficient data flow. Additionally, you will create and optimize Spark applications using Scala and PySpark to process large datasets and implement data transformations and aggregations.

Another important aspect of your role will be integrating Kafka with Spark for real-time processing: you will build systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming (a minimal sketch follows this listing). Collaboration with data teams including data engineers, data scientists, and DevOps is essential to design and implement data solutions effectively. You will also need to tune and optimize Spark and Kafka clusters to ensure high performance, scalability, and efficiency of data processing workflows.

Writing clean, functional, and optimized code while adhering to coding standards and best practices will be a key part of your daily tasks. Troubleshooting and resolving issues related to Kafka and Spark applications, as well as maintaining documentation for Kafka configurations, Spark jobs, and other processes, are also important aspects of the role. Continuous learning and applying new advancements in functional programming, big data, and related technologies is crucial.

Proficiency in the Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala), Spark (Scala, Python), Kafka, ETL processes, and data ingestion tools is required. Deep hands-on expertise in PySpark, Scala, and Kafka, programming languages such as Scala, Python, or Java for developing Spark applications, and SQL for data querying and analysis are necessary. Additionally, familiarity with data warehousing concepts, Linux/Unix operating systems, problem-solving and analytical skills, and version control systems will be beneficial in performing your duties effectively.

This is a full-time position in the Technology job family group, specifically in Applications Development. If you require a reasonable accommodation to use search tools or apply for a career opportunity due to a disability, please review Accessibility at Citi. You can also refer to Citi's EEO Policy Statement and the Know Your Rights poster for more information.
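
The Kafka-to-Spark integration described above is typically a short program in Structured Streaming. Below is a minimal, hedged PySpark sketch; the broker address, topic name, event schema, and checkpoint path are placeholders invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-structured-streaming").getOrCreate()

# Schema of the hypothetical JSON events on the topic.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

# Ingest real-time data from Kafka.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "user-events")
    .load()
)

# Parse the value payload and count actions per five-minute window.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)
counts = (
    events.withWatermark("ts", "10 minutes")
    .groupBy(F.window("ts", "5 minutes"), "action")
    .count()
)

query = (
    counts.writeStream.outputMode("update")
    .format("console")
    .option("checkpointLocation", "hdfs:///checkpoints/user-events")
    .start()
)
query.awaitTermination()
```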

Posted 1 month ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position: AI/ML Architect. Location: Mumbai/Bangalore.

Qualifications: The successful candidate should hold one of the following with a good academic record: B.Sc., B.E./B.Tech., M.E., M.S., or M.Tech.

Experience: 10+ years overall, with 4-5 years as an Architect. 4+ years of experience applying statistical/ML algorithms and techniques to real-world data sets. 4+ years as a developer with experience in related scalable and secure applications. 2+ years of experience independently designing core product modules or complex components. Hands-on experience with GenAI.

Key Responsibilities: Design and oversee the overall architecture of ML systems, including data pipelines, model selection, deployment strategies, and ensuring scalability and performance. This requires a strong understanding of AI algorithms, data engineering, and software architecture, while collaborating with cross-functional teams to translate business needs into effective ML solutions. Define the overall architecture of ML systems, considering data ingestion, preprocessing, feature engineering, model training, evaluation, and deployment pipelines. Evaluate and select appropriate ML algorithms based on data characteristics and business objectives. Design and implement data pipelines for data collection, cleaning, transformation, and feature engineering. Ensure ML systems can handle large volumes of data and deliver efficient predictions. You are expected to have depth and breadth of knowledge across multiple technological and architectural areas, including applicable processes, methodologies, standards, products, and frameworks. You would be responsible for defining and documenting architecture, capturing and documenting non-functional (architectural) requirements, preparing estimates, and defining technical solutions to proposals (RFPs). You should provide technical leadership and mentor and guide the team of developers who would be developing the solution. You need to collaborate with multiple teams to arrive at technical and tactical decisions. You should contribute to and adopt practices such as reuse, defect prevention, process optimization, process automation, and productivity enhancement.

Skills: Expert knowledge of languages such as Python. Proficiency in Probability, Statistics, and Linear Algebra. Proficiency in Machine Learning, Deep Learning, and NLP (GenAI is an add-on). Familiarity with data science platforms, tools, and frameworks. Designs scalable processes to collect, manipulate, present, and analyze large datasets in a production-ready environment. Experience with large volumes of data, algorithms, and prototyping. NiFi, Kafka, HDFS, Big Data, MongoDB, Cassandra. CI/CD pipelines with tools like Git, Jenkins, Maven, Ansible, and Docker. Great appreciation of, or expertise in, security products such as endpoint detection, protection and response, and managed detection and response is preferred.

Posted 1 month ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching Linux kernels to address a security vulnerability). Developing systems for monitoring and telemetry into the service's runtime characteristics, and being able to act on that telemetry data, is part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. Must be a US passport holder; this is required by the position to access US Government regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/DevOps culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities - Key Responsibilities: Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service. Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings. Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements. Become an active member of the Apache open-source community when working on open-source components. Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.

Qualifications: Career Level - IC2

About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing [HIDDEN TEXT] or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

You should have 5-12 years of experience in Big Data and data-related technologies. Your expertise should include a deep understanding of distributed computing principles and strong knowledge of Apache Spark. Proficiency in Python programming is required, along with experience using technologies such as Hadoop v2, MapReduce, HDFS, Sqoop, Apache Storm, and Spark Streaming for building stream-processing systems. You should have a good understanding of Big Data querying tools like Hive and Impala, as well as experience in integrating data from various sources such as RDBMS, ERP, and files. Knowledge of SQL queries, joins, stored procedures, and relational schemas is essential. Experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is also expected. The role requires performance tuning of Spark jobs, experience with Azure Databricks, and the ability to efficiently lead a team. Designing and implementing Big Data solutions, as well as following Agile methodology, are key aspects of this position.

Posted 1 month ago

Apply

10.0 - 15.0 years

0 Lacs

Delhi

On-site

As a seasoned data engineering professional with 10+ years of experience, you will lead and mentor a team of data engineers to ensure high performance and career growth. Your primary responsibility will be to architect and optimize scalable data infrastructure, guaranteeing high availability and reliability. Additionally, you will drive the development and implementation of data governance frameworks and best practices, collaborating closely with cross-functional teams to define and execute a data roadmap.

Your expertise in backend development using languages like Java, PHP, Python, Node.js, Go, JavaScript, HTML, and CSS will be crucial. Proficiency in SQL, Python, and Scala for data processing and analytics is a must. In-depth knowledge of cloud platforms such as AWS, GCP, or Azure is required, along with hands-on experience in big data technologies like Spark, Hadoop, Kafka, and distributed computing frameworks. You will be responsible for ensuring data security, compliance, and quality across all data platforms while optimizing data processing workflows for performance and cost efficiency. A strong foundation in High-Level Design (HLD) and Low-Level Design (LLD), as well as design patterns, preferably using Spring Boot or Google Guice, is necessary. Experience with data warehousing solutions like Snowflake, Redshift, or BigQuery will be beneficial.

Your role will also involve working with NoSQL databases such as Redis, Cassandra, MongoDB, and TiDB, as well as automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK. Proven ability to drive technical strategy aligned with business objectives, and strong leadership, communication, and stakeholder management skills are essential for this position.

Candidates from Tier 1 colleges/universities with a background in product startups and experience in implementing data engineering systems from an early stage in the company are preferred. Additionally, experience in machine learning infrastructure or MLOps, exposure to real-time data processing and analytics, and interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture will be advantageous. Prior experience in a SaaS or high-growth tech company will be a plus. If you are a highly skilled data engineer with a passion for innovation and technical excellence, we invite you to apply for this challenging and rewarding opportunity.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior Software Engineer. Experience: 5-8 Years. Location: Bangalore (On-site/Hybrid as applicable).

Job Summary: We're looking for a Senior Software Engineer with deep expertise in Java, Big Data, and cloud-based distributed systems.

Key Responsibilities: Design and architect core product features in a distributed, microservices-based cloud environment. Build scalable, high-performance systems on AWS/Azure cloud platforms. Work with Big Data technologies such as Hadoop, HDFS, and Hive, and with NoSQL databases. Apply strong knowledge of data structures, algorithms, and system design principles. Lead and mentor junior engineers; drive design and code reviews. Own the full software development lifecycle (SDLC) from concept to deployment. Continuously improve system performance, reliability, and scalability. Collaborate with cross-functional and global teams to deliver high-quality solutions.

Requirements: 5-8 years of hands-on experience in Java, distributed systems, microservices, and cloud platforms (AWS or Azure). Strong background in algorithms, data structures, and system design. Experience with Big Data technologies: the Hadoop ecosystem, HDFS, Hive, etc. Proven experience designing and building scalable systems. BS/MS in Computer Science or a related field.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be working as an Informatica BDM professional at PibyThree Consulting Pvt Ltd. in Pune, Maharashtra. PibyThree is a global cloud consulting and services provider focusing on Cloud Transformation, Cloud FinOps, IT Automation, Application Modernization, and Data & Analytics. The company's goal is to help businesses succeed by leveraging technology for automation and increased productivity.

Your responsibilities will include:
- A minimum of 4+ years of development and design experience in Informatica Big Data Management
- Excellent SQL skills
- Hands-on work with HDFS, HiveQL, BDM Informatica, Spark, HBase, Impala, and other big data technologies
- Designing and developing BDM mappings in Hive mode for large volumes of INSERT/UPDATE
- Creating complex ETL mappings using various transformations such as Source Qualifier, Sorter, Aggregator, Expression, Joiner, Dynamic Lookup, Lookups, Filters, Sequence, Router, and Update Strategy
- The ability to debug Informatica and utilize tools like Sqoop and Kafka

This is a full-time position that requires you to work in person during day shifts. The preferred education qualification is a Bachelor's degree, and the preferred experience includes a total of 4 years of work experience, with 2 years specifically in Informatica BDM.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 5-12 years of experience in Big Data and data-related technologies, with expertise in distributed computing principles. Your skills should include an expert-level understanding of Apache Spark and hands-on programming with Python. Proficiency in Hadoop v2, MapReduce, HDFS, and Sqoop is required. Experience in building stream-processing systems using technologies like Apache Storm or Spark Streaming, as well as working with messaging systems such as Kafka or RabbitMQ, will be beneficial. A good understanding of Big Data querying tools like Hive and Impala, along with integration of data from multiple sources including RDBMS, ERP, and files, is necessary. You should possess knowledge of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is expected. Performance tuning of Spark jobs and familiarity with native cloud data services like AWS or Azure Databricks are also required. The role demands the ability to efficiently lead a team, design and implement Big Data solutions, and work as a practitioner of Agile methodology. This position falls under the Data Engineer category and may also suit ML/AI engineers, data scientists, and software engineers.

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Analyst Data Engineer

Introduction To Role: Are you ready to make a difference in the world of data science and advanced analytics? As a Data Engineer within the Commercial Strategic Data Management team, you'll play a pivotal role in transforming data science solutions for the Rare Disease Unit. Your mission will be to craft, develop, and deploy data science solutions that have a real impact on patients' lives. By leveraging cutting-edge tools and technology, you'll enhance delivery performance and data engineering capabilities, creating a seamless platform for the Data Science team and driving business growth. Collaborate closely with the Data Science and Advanced Analytics team, US Commercial leadership, the Sales Field Team, and Field Operations to build data science capabilities that meet commercial needs. Are you ready to take on this exciting challenge?

Accountabilities: Collaborate with the Commercial multi-functional team to find opportunities for using internal and external data to enhance business solutions. Work closely with business and advanced data science teams on cross-functional projects, delivering complex data science solutions that contribute to the Commercial Organization. Manage platforms and processes for complex projects using a wide range of data engineering techniques in advanced analytics. Prioritize business and information needs with management; translate business logic into technical requirements, such as creating queries, stored procedures, and scripts. Interpret data, process it, analyze results, present findings, and provide ongoing reports. Develop and implement databases, data collection systems, data analytics, and strategies that optimize data efficiency and quality. Acquire data from primary or secondary sources and maintain databases/data systems. Identify and define new process improvement opportunities. Manage and support data solutions in BAU scenarios, including data profiling, designing data flow, creating business alerts for fields, and query optimization for ML models.

Essential Skills/Experience: BS/MS in a quantitative field (Computer Science, Data Science, Engineering, Information Systems, Economics). 5+ years of work experience with database skills such as Python, SQL, Snowflake, Amazon Redshift, MongoDB, Apache Spark, Apache Airflow, AWS cloud and Amazon S3, Oracle, and Teradata. Good experience in Apache Spark, Talend Administration Center, or AWS Lambda, MongoDB, Informatica, and SQL Server Integration Services. Experience in building ETL pipelines and data integration. Efficient data management: extract, consolidate, and store large datasets with improved data quality and consistency. Streamlined data transformation: convert raw data into usable formats at scale, automate tasks, and apply business rules. Good written and verbal skills to communicate complex methods and results to diverse audiences; willing to work in a cross-cultural environment. An analytical mind with a problem-solving inclination; proficiency in data manipulation, cleansing, and interpretation. Experience in support and maintenance projects, including ticket handling and process improvement. Setting up workflow orchestration: schedule and manage data pipelines for smooth flow and automation (a minimal Airflow sketch follows this listing). Scalability and performance: handling large data volumes with optimized processing capabilities. Experience with Git.

Desirable Skills/Experience: Knowledge of distributed computing and Big Data technologies like Hive, Spark, Scala, and HDFS; use of these technologies along with statistical tools like Python/R. Experience working with HTTP requests/responses and REST APIs. Familiarity with data visualization tools like Tableau, Qlik, Power BI, and Excel charts/reports. Working knowledge of Salesforce/Veeva CRM, data governance, and data mining algorithms. Hands-on experience with EHR, administrative claims, and laboratory data (e.g., Prognos, IQVIA, Komodo, Symphony claims data). Good experience in consulting, healthcare, or biopharmaceuticals.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca's Alexion division, you'll find an environment where your work truly matters. Embrace the opportunity to grow and innovate within a rapidly expanding portfolio. Experience the entrepreneurial spirit of a leading biotech combined with the resources of a global pharma. You'll be part of an energizing culture where connections are built to explore new ideas. As a member of our commercial team, you'll meet the needs of under-served patients worldwide. With tailored development programs designed for skill enhancement and fostering empathy for patients' journeys, you'll align your growth with our mission. Supported by exceptional leaders and peers across marketing and compliance, you'll drive change with integrity in a culture celebrating diversity and innovation. Ready to make an impact? Apply now to join our team!

Date Posted: 29-Jul-2025. Closing Date: 04-Aug-2025.

Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness. The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness, or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact [HIDDEN TEXT]. Alexion participates in E-Verify.
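
The workflow orchestration skill called out in this listing is most often exercised in Apache Airflow. Here is a minimal, hedged DAG sketch assuming Airflow 2.x; the DAG id, schedule, and task callables are invented placeholders standing in for real extract/transform/load logic.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull source data")  # placeholder for a real extract step


def transform():
    print("clean and reshape")  # placeholder for a real transform step


def load():
    print("write to warehouse")  # placeholder for a real load step


# Daily ETL pipeline; names and schedule are illustrative assumptions.
with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```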

Posted 1 month ago

Apply

3.0 - 7.0 years

3 - 7 Lacs

Bengaluru, Karnataka, India

On-site

Job description:
- Define, implement, and validate solution frameworks and architecture patterns for data modeling, data integration, processing, reporting, analytics, and visualization using leading cloud, big data, open-source, and other enterprise technologies.
- Develop scalable data and analytics solutions leveraging standard platforms, frameworks, patterns, and full-stack development skills.
- Analyze, characterize, and understand data sources, participate in design discussions, and provide guidance related to database technology best practices.
- Write tested, robust code that can be quickly moved into production.

Responsibilities:
- Experience with distributed data processing and management systems.
- Experience with cloud technologies including Spark SQL, Java/Scala, HDFS, AWS EC2, AWS S3, etc.
- Familiarity with leveraging and modifying open-source libraries to build custom frameworks.

Primary Technical Skills: Spark SQL, Java/Scala, Sbt/Maven/Gradle, HDFS, Hive, AWS (EC2, S3, SQS, EMR, Glue scripts, Lambda, Step Functions), IntelliJ IDE, JIRA, Git, Bitbucket/GitLab, Linux, Oozie.
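
As a hedged sketch of the Spark SQL work in this stack, the snippet below registers a Parquet dataset from S3 as a temporary view and queries it with plain SQL. The bucket, columns, and query are illustrative assumptions, and the hadoop-aws connector is assumed to be available for the s3a:// scheme.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-example").getOrCreate()

# Hypothetical dataset on S3; an HDFS path would work the same way.
trips = spark.read.parquet("s3a://example-bucket/curated/trips/")
trips.createOrReplaceTempView("trips")

# Plain Spark SQL over the registered view.
result = spark.sql("""
    SELECT city,
           COUNT(*)         AS trip_count,
           AVG(fare_amount) AS avg_fare
    FROM trips
    WHERE trip_date >= '2024-01-01'
    GROUP BY city
    ORDER BY trip_count DESC
    LIMIT 20
""")

result.show()
spark.stop()
```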

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Data Engineer at Lifesight, you will play a crucial role in the Data and Business Intelligence organization by focusing on deep data engineering projects. Joining the data platform team in Bengaluru, you will have the opportunity to contribute to defining the technical strategy and data engineering team culture in India. Your responsibilities will include designing and constructing data platforms and services, as well as managing data infrastructure in cloud environments to support strategic business decisions across Lifesight products. You will be expected to build highly scalable distributed data processing systems, data solutions, and data pipelines that optimize data quality and are resilient to poor-quality data sources. Additionally, you will own data mapping, business logic, transformations, and data quality, while participating in architecture discussions, influencing the product roadmap, and taking ownership of new projects. The ideal candidate for this role should possess proficiency in Python and PySpark, a deep understanding of Apache Spark, experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto, and familiarity with distributed database systems. Experience working with various file formats like Parquet and Avro, with NoSQL databases, and with AWS and GCP is preferred. A minimum of 5 years of professional experience as a data or software engineer is required for this full-time position. If you are a self-starter who is passionate about data engineering, ready to work with big data technologies, and eager to collaborate with a team of engineers while mentoring others, we encourage you to apply for this exciting opportunity at Lifesight.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You have experience in ETL testing and are familiar with Agile methodology. With a minimum of 4-6 years of testing experience in test planning and execution, you possess working knowledge of database testing. Prior experience in the auditing domain would be advantageous. Your strong application analysis, troubleshooting, and behavioral skills, along with extensive experience in manual testing, will be valuable. Experience in automation scripting is not mandatory but would be beneficial. You are adept at leading discussions with business, development, and vendor teams for testing activities such as defect coordination and test scenario reviews. Your excellent verbal and written communication skills enable you to communicate effectively with various stakeholders, and you are capable of working independently and collaboratively with onshore and offshore teams. The role requires an experienced ETL developer with proficiency in Big Data technologies like Hadoop.

Key Skills Required:
- Hadoop (Hortonworks), HDFS
- Hive, Pig, Knox, Ambari, Ranger, Oozie
- Talend, SSIS
- MySQL, MS SQL Server, Oracle
- Windows, Linux

Being open to working second shifts (1 pm - 10 pm) is essential for this role. If you are interested, please share your profile on mytestingcareer.com. When responding, kindly include your current CTC, expected CTC, notice period, current location, and contact number.

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Zinnia is the leading technology platform for accelerating life and annuities growth, simplifying the experience of buying, selling, and administering insurance products. Our success is driven by a commitment to three core values: be bold, team up, deliver value. With over $180 billion in assets under administration, serving 100+ carrier clients, 2500 distributors, and partners, Zinnia enables more people to protect their financial futures.

We are looking for an experienced Data Engineer to join our data engineering team. Your role will involve designing, building, and optimizing robust data pipelines and platforms that power our analytics, products, and decision-making. You will collaborate with data scientists, analysts, product managers, and other engineers to deliver scalable, efficient, and reliable data solutions. Your responsibilities will include designing, developing, and maintaining scalable big data pipelines using Spark (Scala or PySpark), Hive, and HDFS. You will also build and manage data workflows and orchestration using Airflow, write efficient production-grade code in languages like Python, Java, or Scala, and develop complex SQL queries for data transformation and reporting. Additionally, you will work on cloud platforms like AWS to deploy and manage data infrastructure and collaborate with data stakeholders to deliver high-quality data solutions.

To be successful in this role, you should have strong experience with the Big Data stack, excellent programming skills, expertise in SQL, hands-on experience with Spark tuning and optimization, and familiarity with Airflow for data workflow orchestration. A degree in Computer Science, Engineering, or a related field, along with at least 5 years of experience as a Data Engineer, is required. You should also have a proven track record of delivering production-ready data pipelines in big data environments and possess strong analytical thinking, problem-solving, and communication skills.

Preferred or nice-to-have skills include knowledge of the AWS ecosystem, experience with Trino or Presto for interactive querying, familiarity with lakehouse formats, exposure to dbt for analytics engineering, experience with Kafka for streaming ingestion, and familiarity with monitoring tools like Prometheus and Grafana. Joining our team as a Data Engineer will provide you with the opportunity to work on cutting-edge technologies, collaborate with a diverse group of professionals, and contribute to impactful projects that shape the future of insurance technology.

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

Sykatiya Technology Pvt Ltd is a leading semiconductor-industry innovator committed to leveraging cutting-edge technology to solve complex problems. We are currently looking for a highly skilled and motivated Data Scientist to join our dynamic team and contribute to our mission of driving innovation through data-driven insights.

As the Lead Data Scientist and Machine Learning Engineer at Sykatiya Technology Pvt Ltd, you will play a crucial role in analyzing large datasets to uncover patterns, develop predictive models, and implement AI/ML solutions. Your responsibilities will include working on projects involving neural networks, deep learning, data mining, and natural language processing (NLP) to drive business value and enhance our products and services.

Key Responsibilities:
- Lead the design and implementation of machine learning models and algorithms to address complex business problems.
- Utilize deep learning techniques to enhance neural network models and improve prediction accuracy.
- Conduct data mining and analysis to extract actionable insights from both structured and unstructured data.
- Apply natural language processing (NLP) techniques for advanced text analytics.
- Develop and maintain end-to-end data pipelines, ensuring data integrity and reliability.
- Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
- Mentor and guide junior data scientists and engineers in best practices and advanced techniques.
- Stay updated with the latest advancements in AI/ML, neural networks, deep learning, data mining, and NLP.

Technical Skills:
- Proficiency in Python and its libraries such as NumPy, pandas, scikit-learn, TensorFlow, Keras, and PyTorch.
- Strong understanding of machine learning algorithms and techniques.
- Extensive experience with neural networks and deep learning frameworks.
- Hands-on experience with data mining and analysis techniques.
- Proficiency in natural language processing (NLP) tools and libraries like NLTK, spaCy, and transformers.
- Proficiency in Big Data technologies including Sqoop, Hadoop, HDFS, Hive, and PySpark.
- Experience with cloud platforms, including AWS services such as S3, Step Functions, EventBridge, Athena, RDS, Lambda, and Glue.
- Strong knowledge of database management systems like SQL, Teradata, MySQL, PostgreSQL, and Snowflake.
- Familiarity with other tools like ExactTarget, Marketo, SAP BO, Agile, and JIRA.
- Strong analytical skills to analyze large datasets and derive actionable insights.
- Excellent problem-solving skills with the ability to think critically and creatively.
- Effective communication skills and teamwork abilities to collaborate with various stakeholders.

Experience: At least 8 to 12 years in a similar role.
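
As one hedged illustration of the NLP work this posting describes, here is a small scikit-learn text-classification sketch using TF-IDF features; the toy corpus and labels are invented for the example and imply nothing about the company's actual data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus; a real project would load thousands of documents.
texts = [
    "chip yield improved after the process change",
    "customer complaint about late delivery",
    "wafer defect rate spiked in fab two",
    "invoice dispute raised by the distributor",
]
labels = ["engineering", "support", "engineering", "support"]

# TF-IDF features feeding a linear classifier, built as one pipeline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["defect analysis for the new wafer lot"]))
```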

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Senior Engineer at Impetus Technologies, you will play a crucial role in designing, developing, and deploying scalable data processing applications using Java and Big Data technologies. Your responsibilities will include collaborating with cross-functional teams, mentoring junior engineers, and contributing to architectural decisions that improve system performance and scalability.

Day to day, you will design and maintain high-performance applications, implement data ingestion and processing workflows using frameworks like Hadoop and Spark, and optimize existing applications for improved performance and reliability. You will also mentor junior engineers, participate in code reviews, and stay current with the latest technology trends in Java and Big Data.

To excel in this role, you should possess strong proficiency in the Java programming language, hands-on experience with Big Data technologies such as Apache Hadoop and Apache Spark, and an understanding of distributed computing concepts. Additionally, you should have experience with data processing frameworks and databases, strong problem-solving skills, and excellent communication and teamwork abilities.

You will collaborate with a diverse team of skilled engineers, data scientists, and product managers who are passionate about technology and innovation. The team environment encourages knowledge sharing, continuous learning, and regular technical workshops to enhance your skills and keep you current with industry trends.

Overall, you will be responsible for designing and developing scalable Java applications for Big Data processing, ensuring code quality and performance, and troubleshooting and optimizing existing systems to enhance performance and scalability.

Qualifications:
- Strong proficiency in the Java programming language
- Hands-on experience with Big Data technologies such as Hadoop, Spark, and Kafka
- Understanding of distributed computing concepts
- Experience with data processing frameworks and databases
- Strong problem-solving skills
- Knowledge of version control systems and CI/CD pipelines
- Excellent communication and teamwork abilities
- Bachelor's or master's degree in Computer Science, Engineering, or a related field preferred

Experience: 7 to 10 years

Job Reference Number: 13131
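The posting leads with Java, but as a compact sketch of the Kafka-plus-Spark ingestion pattern it describes, here is a PySpark Structured Streaming skeleton; the broker address, topic, and paths are hypothetical, and running it requires the Spark Kafka connector package on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Sketch of a streaming ingestion flow: consume a Kafka topic and land
# the parsed records on HDFS. Broker, topic, and paths are placeholders.
spark = SparkSession.builder.appName("orders_stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "orders")                      # hypothetical topic
    .load()
)

# Kafka delivers key/value as binary; cast the value to a string before parsing.
parsed = stream.select(F.col("value").cast("string").alias("payload"))

# The checkpoint location lets Spark resume from the last committed offsets.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "hdfs:///data/landing/orders/")
    .option("checkpointLocation", "hdfs:///checkpoints/orders/")
    .start()
)
query.awaitTermination()
```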

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The Engineer Intmd Analyst is an intermediate-level position responsible for a variety of engineering activities, including the design, acquisition, and development of hardware, software, and network infrastructure in coordination with the Technology team. The overall objective of this role is to ensure quality standards are being met within existing and planned frameworks.

Responsibilities:
- Provide assistance with a product or product component development within the technology domain
- Conduct product evaluations with vendors and recommend product customization for integration with systems
- Assist with training activities, mentor junior team members, and ensure teams' adherence to all control and compliance initiatives
- Assist with application prototyping and recommend solutions around implementation
- Provide third-line support to identify the root cause of issues and react to systems and application outages or networking issues
- Support projects and provide project status updates to the project manager or senior engineer
- Partner with development teams to identify engineering requirements and assist with defining application/system requirements and processes
- Create installation documentation and training materials, and deliver technical training to support the organization
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency

Qualifications:
- 5-8 years of relevant experience in an engineering role
- Experience working in financial services or a large, complex and/or global environment
- Involvement in DevOps activities (SRE/LSE, auto deployment, self-healing) and application support

Tech Stack (Basic): Java/Python, Unix, Oracle

Essential Skills:
- IT experience working with one of HBase, HDFS, Kafka, Neo4j, Akka, Spark, Storm, and GemFire
- IT support experience working in Unix, cloud, and Windows environments
- Experience supporting databases like MongoDB, Oracle, Sybase, MS SQL, and DB2
- Experience supporting applications deployed in WebSphere, WebLogic, IIS, and Tomcat
- Familiarity with Autosys and its setup
- Understanding of client-server architecture (clustered and non-clustered)
- Basic networking knowledge (load balancers, network protocols)
- Working knowledge of the Lightweight Directory Access Protocol (LDAP) and Single Sign-On concepts
- ServiceNow expertise
- Experience working in a multiple-application support model is preferred

Other Essential Attributes:
- Consistently demonstrates clear and concise written and verbal communication
- Comprehensive knowledge of design metrics, analytics tools, benchmarking activities, and related reporting to identify best practices
- Demonstrated analytic/diagnostic skills
- Ability to work in a matrix environment and partner with virtual teams
- Ability to work independently, prioritize, and take ownership of various parts of a project or initiative
- Ability to work under pressure and manage tight deadlines or unexpected changes in expectations or requirements
- Proven track record of operational process change and improvement

Education:
- Bachelor's degree/University degree or equivalent experience

Job Family Group: Technology
Job Family: Systems & Engineering
Time Type: Full time

Most Relevant Skills: Please see the requirements listed above.
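As a small illustration of the directory-lookup knowledge this role asks for, the sketch below performs a read-only LDAP search with the Python ldap3 library; the host, base DN, bind account, and searched uid are placeholders, not Citi systems.

```python
from ldap3 import ALL, Connection, Server

# Read-only directory lookup sketch; every identifier below is a
# placeholder for illustration, not a real directory entry.
server = Server("ldap://directory.example.com", get_info=ALL)
conn = Connection(
    server,
    user="cn=svc-lookup,ou=service-accounts,dc=example,dc=com",
    password="change-me",  # placeholder; real deployments use a vaulted secret
    auto_bind=True,
)

# Search for one user and read a few attributes commonly used for SSO.
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(uid=jdoe)",
    attributes=["cn", "mail", "memberOf"],
)
for entry in conn.entries:
    print(entry.entry_dn, entry.cn, entry.mail)

conn.unbind()
```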

Posted 1 month ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

As an experienced professional, you will be responsible for implementing and supporting Hadoop platform-based applications that store, retrieve, and process terabytes of data efficiently and reliably.

To excel in this role, you should possess 4+ years of core Java or 2+ years of Python experience. At least 1+ years of working experience with the Hadoop stack, including HDFS, MapReduce, HBase, and other related technologies, is desired, and a background of 1+ years in database management with MySQL or an equivalent system is an added advantage.

If you meet the qualifications and are enthusiastic about this opportunity, please share your latest CV with us at data@scalein.com or reach out via our contact page. Your contribution will be pivotal in driving the success of our data management initiatives.
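As a hedged sketch of the Hadoop-stack experience described above, below is the classic Hadoop Streaming pattern in Python: a mapper that emits key/count pairs and a reducer that sums them; the local pipe in the docstring is just a way to test the pair before submitting it via the hadoop-streaming jar.

```python
#!/usr/bin/env python3
"""Classic Hadoop Streaming pair: the mapper emits "key\t1" lines and the
reducer sums counts per key. Test locally with:
    cat events.txt | python3 mr.py map | sort | python3 mr.py reduce
"""
import sys


def mapper():
    # Emit one count per whitespace-separated token on stdin.
    for line in sys.stdin:
        for token in line.split():
            print(f"{token}\t1")


def reducer():
    # Hadoop sorts mapper output by key, so counts for a key arrive contiguously.
    current, total = None, 0
    for line in sys.stdin:
        key, _, count = line.rstrip("\n").partition("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")


if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```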

Posted 1 month ago

Apply