
5402 Hive Jobs - Page 16

JobPe aggregates results for easy application access; you apply directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Introduction: A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.

Your Role and Responsibilities: As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.

Preferred Education: Master's degree.

Required Technical and Professional Expertise: Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs, with an excellent understanding of OOP and design patterns. Strong knowledge of ORM tools such as Hibernate or JPA and of Java-based microservices frameworks, with hands-on experience in Spring Boot microservices. Primary skills: Core Java, Spring Boot, Java/J2EE, microservices; the Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark; Python is good to have. Strong knowledge of microservice logging, monitoring, debugging, and testing, and in-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes and with messaging platforms such as Kafka or IBM MQ, plus a good understanding of test-driven development. Familiarity with Ant, Maven, or other build-automation tools for Java, Spring Boot, APIs, microservices, and security.

Preferred Technical and Professional Experience: Experience in concurrent design and multi-threading.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate, and advance faster than ever.

Responsibilities include, but are not limited to:
- Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing, doing analysis and machine learning on terabytes and petabytes of diverse datasets.
- Experience in statistical modeling, feature extraction and analysis, and supervised/unsupervised/semi-supervised learning. Exposure to the semiconductor industry is a plus but not a requirement.
- Ability to extract data from different databases via SQL and other query languages, and to apply data cleansing, outlier identification, and missing-data techniques.
- Strong software development skills.
- Strong verbal and written communication skills.

Experience with, or desire to learn:
- Machine learning and other advanced analytical methods
- Fluency in Python and/or R
- pySpark and/or SparkR and/or sparklyr
- Hadoop (Hive, Spark, HBase)
- Teradata and/or other SQL databases
- TensorFlow and/or other statistical software, including scripting capability for automating analyses
- SSIS, ETL
- JavaScript, AngularJS 2.0, Tableau

Experience working with time-series data, images, semi-supervised learning, and data with frequently changing distributions is a plus. Experience working with Manufacturing Execution Systems (MES) is a plus. Existing papers from CVPR, NIPS, ICML, KDD, and other key conferences are a plus, but this is not a research position.

About Micron Technology, Inc.: We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities, from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com. Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.

AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.

Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.

Posted 1 week ago

Apply

3.0 years

4 Lacs

Delhi

On-site

Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/M.Sc (IT or CS)/MS
Salary: Up to ₹80k per month (depending on the interview and experience)
Notice Period: Immediate joiners up to 20 days
Note: Only candidates from Delhi/NCR will be preferred.

Job Summary: We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities:
- Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
- Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
- Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
- Develop and manage workflow orchestration using Apache Airflow.
- Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
- Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
- Ensure data quality, governance, and consistency across the pipeline.
- Collaborate with data engineering teams to build scalable and high-performance data solutions.
- Monitor, debug, and enhance big data workflows to improve reliability and efficiency.

Required Skills & Experience:
- 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
- Strong expertise in ETL processes, data transformation, and data warehousing.
- Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
- Proficiency in SQL and handling structured and unstructured data.
- Experience with NoSQL databases like MongoDB.
- Strong programming skills in Python or Scala for scripting and automation.
- Experience in optimizing Spark and MapReduce jobs for high-performance computing.
- Good understanding of data lake architectures and big data best practices.

Preferred Qualifications:
- Experience in real-time data streaming and processing.
- Familiarity with Docker/Kubernetes for deployment and orchestration.
- Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.

If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!

Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
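By way of illustration, the sketch below shows a minimal PySpark batch-ETL job of the kind this role describes: read raw data from HDFS, cleanse it, and load it into a partitioned Hive table. All paths, table names, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical paths, table, and column names, for illustration only.
spark = (SparkSession.builder
         .appName("orders-etl")
         .enableHiveSupport()
         .getOrCreate())

# Extract: read raw events landed in HDFS.
raw = spark.read.parquet("hdfs:///data/raw/orders/")

# Transform: drop malformed rows and derive a date partition column.
clean = (raw
         .dropna(subset=["order_id", "amount"])
         .withColumn("order_date", F.to_date("created_at")))

# Load: write into a partitioned Hive table for downstream consumers.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_clean"))
```

In production, a job like this would typically be scheduled from an Apache Airflow DAG, with feeds arriving via NiFi or Kafka as the posting describes.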

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role: Build solutions for real-world problems in workforce management for retail. You will work with a team of highly skilled developers and product managers throughout the entire software development life cycle of the products we own. In this role you will be responsible for designing, building, and maintaining our big data pipelines, with a primary focus on developing data pipelines using available technologies.

In this job, I'm accountable for following our Business Code of Conduct, always acting with integrity and due diligence, and for these specific responsibilities:
- Represent Talent Acquisition in all forums/seminars pertaining to process, compliance, and audit.
- Perform other miscellaneous duties as required by management.
- Drive CI culture, implementing CI projects and innovation within the team.
- Design and implement scalable and reliable data processing pipelines using Spark/Scala/Python and the Hadoop ecosystem (see the sketch after this listing).
- Develop and maintain ETL processes to load data into our big data platform.
- Optimize Spark jobs and queries to improve performance and reduce processing time.
- Work with product teams to communicate and translate needs into technical requirements.
- Design and develop monitoring tools and processes to ensure data quality and availability.
- Collaborate with other teams to integrate data processing pipelines into larger systems.
- Deliver high-quality code and solutions, bringing solutions into production.
- Perform code reviews to optimise the technical performance of data pipelines.
- Continually look for ways to evolve and improve our technology, processes, and practices.
- Lead group discussions on system design and architecture.
- Manage and coach individuals, providing regular feedback and career development support aligned with business goals.
- Allocate and oversee team workload effectively, ensuring timely and high-quality outputs.
- Define and streamline team workflows, ensuring consistent adherence to SLAs and data governance practices.
- Monitor and report key performance indicators (KPIs) to drive continuous improvement in delivery efficiency and system uptime.
- Oversee resource allocation and prioritization, aligning team capacity with project and business demands.

Key people and teams I work with in and outside of Tesco: TBS & Tesco Senior Management, the TBS Reporting Team, Tesco UK/ROI/Central Europe, and business stakeholders, plus any other accountabilities assigned by the business.

Skills: ETL, YARN, Spark, Hive, Hadoop, PySpark/Python.

Experience relevant for this job:
- 7+ years of experience in building and maintaining big data query platforms using Spark/Scala in Linux/Unix/Shell environments, including query optimisation.
- Strong knowledge of distributed computing principles and big data technologies such as Hadoop, Spark, and streaming.
- Experience with ETL processes and data modelling.
- Problem-solving and troubleshooting skills.
- Working knowledge of Oozie/Airflow.
- Experience in writing unit test cases and shell scripting.
- Ability to work independently and as part of a team in a fast-paced environment.

Good to have: Kafka, REST API/reporting tools.

What's in it for you? At Tesco, we are committed to providing the best for you.
As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars: Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco are determined by four principles: simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us: Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services organisation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business.
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
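To make the Spark optimisation responsibilities above concrete, here is a minimal, hypothetical PySpark sketch of two common techniques: broadcasting a small dimension table and repartitioning before a wide aggregation. Dataset paths and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension table.
facts = spark.read.parquet("hdfs:///data/facts/")
dims = spark.read.parquet("hdfs:///data/dims/")

# Broadcast the small side to avoid a shuffle-heavy sort-merge join.
joined = facts.join(broadcast(dims), "dim_id")

# Repartition on the grouping key before a wide aggregation to balance
# tasks; cache only because the result is reused twice below.
result = joined.repartition(200, "dim_id").groupBy("dim_id").count().cache()

result.show(10)
print(result.count())
```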

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Java Developer, you will be responsible for analyzing, designing, programming, debugging, and modifying software enhancements and/or new products used in various computer programs. Your expertise in Java, Spring MVC, Spring Boot, database design, and query handling will be utilized to write code, complete programming, and perform testing and debugging of applications. You will work on local, networked, cloud-based, or Internet-related computer programs, ensuring the code meets the necessary standards for commercial or end-user applications such as materials management, financial management, HRIS, mobile apps, or desktop application products. Your role will involve working with RESTful web services/microservices for JSON creation, data parsing/processing in batch and stream modes, and messaging platforms like Kafka, Pub/Sub, and ActiveMQ, among others. Proficiency in OS, Linux, virtual machines, and open source tools/platforms is crucial for successful implementation. Additionally, you will be expected to have an understanding of data modeling and storage with NoSQL or relational DBs, as well as experience with Jenkins, containerized microservices deployment in cloud environments, and big data development (Spark, Hive, Impala, time-series DBs). To excel in this role, you should have a solid understanding of building microservices/web services using Java frameworks, REST API standards and practices, and object-oriented analysis and design patterns. Experience with cloud technologies like Azure, AWS, and GCP will be advantageous. A candidate with telecom domain experience and familiarity with protocols such as TCP, UDP, SNMP, SSH, FTP, SFTP, CORBA, and SOAP will be preferred. Being enthusiastic about work, passionate about coding, a self-starter, and proactive will be key qualities for success in this position. Strong communication, analytical, and problem-solving skills are essential, along with the ability to write quality, testable, modular code. Experience with big data platforms, participation in Agile development methodologies, and experience working in a start-up environment will be beneficial. Team-leading experience is an added advantage, and immediate joiners will be given special priority. If you possess the necessary skills and experience, have a keen interest in software development, and are ready to contribute to a dynamic team environment, we encourage you to apply for this role.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

Join our dynamic Workforce Planning (WFP) team within the Consumer and Community Banking (CCB) Operations division and be part of a forward-thinking organization that leverages data science to optimize workforce efficiency. Contribute to innovative projects that drive impactful solutions for Chase's operations. As an Operations Research Analyst within the WFP Data Science team, you will tackle complex, high-impact projects. Your responsibilities will include designing and developing optimization and simulation models, supporting OR projects either individually or as part of a team, and collaborating with stakeholders to understand business requirements and define solution objectives clearly. It is crucial to identify and select the correct method to solve each problem while staying up to date on the latest OR methodologies and ensuring the robustness of any mathematical solution. Additionally, you will be expected to develop and communicate recommendations and OR solutions in an easy-to-understand manner, leveraging data to tell a story. Your role will involve leading and persuading others positively to influence team efforts and helping frame business problems as technical problems with feasible solutions. The ideal candidate should possess a Master's degree with 4+ years or a Doctorate (PhD) with 2+ years of experience in Operations Research, Industrial Engineering, Systems Engineering, Financial Engineering, Management Science, or related disciplines. You should have experience supporting OR projects with multiple team members; hands-on experience developing simulation models, optimization models, and/or heuristics; a deep understanding of the mathematics and theory behind Operations Research techniques; and proficiency in open-source (OSS) programming languages like Python, R, or Scala. Experience with commercial solvers like GUROBI, CPLEX, XPRESS, or MOSEK, as well as familiarity with basic data table operations (SQL, Hive, etc.), is required. Demonstrated relationship-building skills and the ability to make things happen through positive influence are essential. Preferred qualifications include advanced expertise with Operations Research techniques, prior experience building reinforcement learning models, extensive knowledge of stochastic modelling, and previous experience leading highly complex cross-functional technical projects with multiple stakeholders.
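As a flavor of the optimization modeling this role centers on, here is a toy staffing linear program using the open-source scipy stack rather than a commercial solver; all coefficients and demands are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy workforce-planning LP: choose full-time and part-time headcount
# to cover two shifts at minimum cost. All numbers are illustrative.
cost = np.array([8.0, 5.0])            # cost per head: [full_time, part_time]

# Coverage per head for each shift; linprog expects A_ub @ x <= b_ub,
# so ">= demand" constraints are negated.
A_ub = -np.array([[1.0, 0.5],          # shift 1 coverage
                  [1.0, 1.0]])         # shift 2 coverage
b_ub = -np.array([40.0, 60.0])         # per-shift demand

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("headcounts:", res.x, "total cost:", res.fun)
```

A commercial solver such as GUROBI or CPLEX would be swapped in at production scale, typically with integrality constraints on headcount.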

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Overview: We are looking for an experienced Solution Architect (AI/ML & Data Engineering) to lead the design and delivery of advanced data and AI/ML solutions for our clients. The ideal candidate will have a strong background in end-to-end data architecture, AI lifecycle management, cloud technologies, and emerging Generative AI.

Responsibilities:
- Collaborate with clients to understand business requirements and design robust data solutions.
- Lead the development of end-to-end data pipelines, including ingestion, storage, processing, and visualization.
- Architect scalable, secure, and compliant data systems following industry best practices.
- Guide data engineers, analysts, and cross-functional teams to ensure timely delivery of solutions.
- Participate in pre-sales efforts: solution design, proposal creation, and client presentations.
- Act as a technical liaison between clients and internal teams throughout the project lifecycle.
- Stay current with emerging technologies in AI/ML, data platforms, and cloud services.
- Foster long-term client relationships and identify opportunities for business expansion.
- Understand and architect across the full AI lifecycle, from ingestion to inference and operations.
- Provide hands-on guidance for containerization and deployment using Kubernetes.
- Ensure proper implementation of data governance, modeling, and warehousing.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience as a Data Solution Architect or in a similar role.
- Deep technical expertise in data architecture, engineering, and AI/ML systems.
- Strong experience with Hadoop-based platforms, ideally Cloudera Data Platform or Data Fabric.
- Proven pre-sales experience: technical presentations, solutioning, and RFP support.
- Proficiency in cloud platforms (Azure preferred; also AWS or GCP) and cloud-native data tools.
- Exposure to Generative AI frameworks and LLMs such as OpenAI and Hugging Face.
- Experience in deploying and managing applications on Kubernetes (AKS, EKS, GKE).
- Familiarity with data governance, data modeling, and large-scale data warehousing.
- Excellent problem-solving, communication, and client-facing skills.

Tools & Technology:
- Architecture & engineering / Hadoop ecosystem: Cloudera Data Platform, Data Fabric, HDFS, Hive, Spark, HBase, Oozie.
- ETL & integration: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue.
- Warehousing: Azure Synapse, Redshift, BigQuery, Snowflake, Teradata, Vertica.
- Streaming: Apache Kafka, Azure Event Hubs, AWS.
- Cloud platforms: Azure (preferred), AWS, GCP.
- Data lakes: ADLS, AWS S3, Google Cloud.
- Platforms: Data Fabric, AI Essentials, Unified Analytics, MLDM, MLDE.
- AI/ML & GenAI lifecycle tools: MLflow, Kubeflow, Azure ML, SageMaker, Ray.
- Inference: TensorFlow Serving, KServe, Seldon.
- Generative AI: Hugging Face, LangChain, OpenAI API (GPT-4, etc.).
- DevOps & deployment, Kubernetes: AKS, EKS, GKE, open-source K8s, Helm.
- CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps.

(ref:hirist.tech)
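Since the posting names MLflow among the AI-lifecycle tools, a minimal, hypothetical experiment-tracking sketch may help illustrate that part of the stack; the model, dataset, and parameter values are invented.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Track one training run: parameters, a metric, and the model artifact.
with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```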

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have strong experience in PySpark, Python, Unix scripting, Spark SQL, and Hive. You must be proficient in writing SQL queries and creating views, and possess excellent oral and written communication skills. Prior experience in the insurance domain would be beneficial. A good understanding of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, Oozie, and YARN, is required. Knowledge of AWS services such as Glue, S3, Lambda, Step Functions, and EC2 is essential. Experience in data migration from platforms like Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work. As a candidate, you should have 6-8 years of technical experience in PySpark and AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years of experience in Big Data/ETL using Python, Spark, and Hive, along with 3+ years of experience in AWS. Your primary key skills should include PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), and big data with Python, Spark, and Hive, plus exposure to big data migration. Secondary skills that would be beneficial for this role include Informatica BDM/PowerCenter, Databricks, and MongoDB.
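For a concrete feel of this stack, here is a small, hypothetical PySpark job reading insurance data from S3 and producing a Spark SQL reporting extract; the bucket, tables, and columns are invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("claims-report").getOrCreate()

# Hypothetical S3 locations; EMR/Glue clusters ship with the S3 connector.
policies = spark.read.parquet("s3://example-bucket/insurance/policies/")
claims = spark.read.parquet("s3://example-bucket/insurance/claims/")

policies.createOrReplaceTempView("policies")
claims.createOrReplaceTempView("claims")

# Spark SQL extract: open claims per policy.
report = spark.sql("""
    SELECT p.policy_id, COUNT(c.claim_id) AS open_claims
    FROM policies p
    LEFT JOIN claims c
      ON c.policy_id = p.policy_id AND c.status = 'OPEN'
    GROUP BY p.policy_id
""")

report.write.mode("overwrite").parquet("s3://example-bucket/reports/open_claims/")
```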

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

The ideal candidate for this position should possess strong expertise in programming/scripting languages and a proven ability to debug challenges across various operating systems. A certification in the relevant specialization is required, along with proficiency in design and automation tools. In addition, the candidate should have excellent knowledge of CI and agile frameworks. Moreover, the successful candidate must demonstrate strong communication, negotiation, networking, and influencing skills. Stakeholder management and conflict management skills are also essential for this role. The candidate should be proficient in setting up tools/infrastructure, defect metrics, and traceability metrics. A solid understanding of CI practices and agile frameworks is necessary. Furthermore, the candidate should be able to promote a strategic mindset to ensure the use of the right tools, and coach and mentor the team to follow best practices. Expertise in Big Data and Hadoop ecosystems is required, along with the ability to build real-time stream-processing systems on large-scale data. Proficiency in data ingestion frameworks/data sources and data structures is also crucial for this role. The profile required for this position includes 10+ years of expertise and hands-on experience in Spark with Scala and big data technologies. The candidate should have good working experience in Scala and object-oriented concepts, as well as in HDFS, Spark, Hive, and Oozie. Technical expertise with data models, data mining, and partitioning techniques is also necessary. Additionally, hands-on experience with SQL databases and a good understanding of CI/CD tools such as Maven, Git, Jenkins, and SONAR are required. Knowledge of Kafka and the ELK stack is a plus, and familiarity with data visualization tools like Power BI will be an added advantage. Strong communication and coordination skills with multiple stakeholders are essential, along with the ability to assess existing situations, propose improvements, and follow up on action plans. In conclusion, the ideal candidate should have a professional attitude, be self-motivated, a fast learner, and a team player. The ability to work in international/intercultural environments and interact with onsite stakeholders is crucial for this role. If you are looking to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis, and develop or strengthen your expertise, you will find a perfect fit in this position.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

Join GlobalLogic as a valuable member of the team working on a significant software project for a world-class company that provides M2M/IoT 4G/5G modules to industries such as automotive, healthcare, and logistics. Your engagement will involve contributing to the development of end-user modules' firmware, implementing new features, maintaining compatibility with the latest telecommunication and industry standards, and analyzing and estimating customer requirements.

Requirements:
- BA/BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- Proficiency in Cloud SQL and Cloud Bigtable.
- Experience with Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics.
- Familiarity with Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
- Knowledge of data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing algorithms (MapReduce, Flume).
- Previous experience working with technical customers.
- Proficiency in writing software in languages like Java or Python.
- 6-10 years of relevant consulting, industry, or technology experience.
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills.

Job Responsibilities:
- Hands-on experience working with data warehouses, including technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
- Experience in technical consulting.
- Proficiency in architecting and developing software or internet-scale big data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
- Familiarity with big data, information retrieval, data mining, machine learning, and building high-availability applications with modern web technologies.
- Working knowledge of ITIL and/or agile methodologies.
- Google Data Engineer certification.

What We Offer:
- Culture of caring: We prioritize a culture of caring, where people come first, fostering an inclusive environment of acceptance and belonging.
- Learning and development: Commitment to continuous learning and growth, offering various programs, training curricula, and hands-on opportunities for personal and professional advancement.
- Interesting & meaningful work: Engage in impactful projects that allow for creative problem-solving and exploration of new solutions.
- Balance and flexibility: Embrace work-life balance with diverse career areas, roles, and work arrangements to support personal well-being.
- High-trust organization: Join a high-trust organization with a focus on integrity, trustworthiness, and ethical practices.

About GlobalLogic: GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with forward-thinking companies to create innovative digital products and experiences. Join the team in transforming businesses and industries through intelligent products, platforms, and services, contributing to cutting-edge solutions that shape the world today.
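Given the posting's emphasis on BigQuery, a minimal query sketch with the official Python client may be useful; the project, dataset, and table names are invented, and application-default credentials are assumed.

```python
from google.cloud import bigquery

# Assumes application-default credentials are configured.
client = bigquery.Client()

# Hypothetical telemetry table; adjust names for a real project.
query = """
    SELECT device_id, COUNT(*) AS event_count
    FROM `example-project.telemetry.events`
    WHERE event_date = CURRENT_DATE()
    GROUP BY device_id
    ORDER BY event_count DESC
    LIMIT 10
"""

# Run the query and iterate over the result rows.
for row in client.query(query).result():
    print(row.device_id, row.event_count)
```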

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Pune, Maharashtra

On-site

You are an experienced professional with over 7 years of experience in application or production support, looking to join our Production/Application Support team. You possess a blend of strong technical skills in Unix, SQL, and big data technologies, along with domain expertise in financial services such as securities, secured financing, rates, liquidity reporting, derivatives, front-office/back-office systems, and the trading lifecycle. Your key responsibilities will include providing L2 production support for mission-critical liquidity reporting and financial applications, ensuring high availability and performance. You will monitor and resolve incidents related to trade capture, batch failures, position keeping, market data, pricing, risk, and liquidity reporting, and proactively manage alerts, logs, and jobs using Autosys, Unix tools, and monitoring platforms like ITRS/AWP. In this role, you will execute advanced SQL queries and scripts for data analysis, validation, and issue resolution. You will also support multiple applications built on stored procedures, SSIS, SSRS, and big data ecosystems (Hive, Spark, Hadoop), and troubleshoot data pipeline issues. It will be your responsibility to maintain and improve knowledge bases, SOPs, and runbooks for production support while actively participating in change management and release activities, including deployment validations. You will take the lead in root cause analysis (RCA), conduct post-incident reviews, and drive permanent resolutions. Collaboration with infrastructure teams on capacity, performance, and system resilience initiatives will be crucial, and your contribution to continuous service improvement, stability management, and automation initiatives will be highly valued. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field. A minimum of 7 years of experience in application or production support, with at least 2 years at an advanced level, is required. Hands-on experience with Unix/Linux scripting, file manipulation, and job control; SQL (MSSQL/Oracle or similar), stored procedures, SSIS, and SSRS; big data technologies (Hadoop, Hive, Spark); job schedulers like Autosys; and log analysis tools will be essential. A solid understanding of financial instruments and the trade lifecycle, knowledge of front-office/back-office and reporting workflows and operations, excellent analytical and problem-solving skills, effective communication and stakeholder management skills, and experience with ITIL processes are also key requirements for this role. If you meet these qualifications and are looking to join a dynamic team, we encourage you to apply.
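As an illustration of the SQL-driven triage this role describes, here is a small, hypothetical Python sketch that flags trades stuck between capture and reporting; sqlite3 stands in for the production RDBMS driver, and the table and column names are invented.

```python
import sqlite3  # stand-in for the production DB-API driver (Oracle/MSSQL)

# Hypothetical triage query: trades captured but never reported
# within the expected window.
STUCK_TRADES_SQL = """
    SELECT trade_id, status, last_updated
    FROM trade_events
    WHERE status = 'CAPTURED'
      AND last_updated < datetime('now', '-2 hours')
    ORDER BY last_updated
"""

def find_stuck_trades(conn):
    """Return trades that never progressed past capture, for escalation."""
    cur = conn.cursor()
    cur.execute(STUCK_TRADES_SQL)
    return cur.fetchall()

if __name__ == "__main__":
    with sqlite3.connect("support.db") as conn:
        for trade in find_stuck_trades(conn):
            print(trade)
```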

Posted 1 week ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Background: Praan (Praan, Inc.) is an impact-focused deep-tech startup democratizing clean air using breakthrough filterless technology. The company is backed by top-tier VCs and CXOs globally and currently operates between the United States and India. Our team puts extreme attention to detail and loves building technology that's aspirational. Praan's team and culture are positioned to empower people to solve large global problems at an accelerated pace.

Why: Everyone worries about the climate-change doomsday expected in the 2050s. However, there is one doomsday that is already a reality for millions of people around the world today. Air pollution takes more than 7 million lives globally every single year. Over 5% of premature child deaths occur due to air pollution in developing countries. Everyone has relied on governments or experts to solve the problem, but most solutions up until today have been either too expensive or too ineffective. Praan is an attempt at making the future cleaner, healthier, and safer for the generations to come.

Job Description:
- Supervise, monitor, and coordinate all production activities across the HIVE and MKII assembly lines.
- Ensure adherence to daily, weekly, and monthly production targets while maintaining product quality and minimizing downtime.
- Implement and sustain Kaizen, 5S, and other continuous improvement initiatives to enhance line efficiency and reduce waste.
- Oversee daily start-of-day and end-of-day inventory reporting.
- Ensure line balancing for optimal resource utilization and minimal bottlenecks.
- Monitor and manage manpower deployment, shift scheduling, absentee management, and skill mapping to maintain productivity.
- Drive quality standards by coordinating closely with the Manufacturing Lead.
- Track and analyze key production KPIs (OEE, yield, downtime) and initiate corrective actions.
- Ensure adherence to SOPs, safety protocols, and compliance standards.
- Support new product introductions (NPIs) and design changes in coordination with R&D/engineering teams.
- Train and mentor line operators and line leaders, ensuring skill development and adherence to performance standards.
- Monitor and report on key production metrics, including output, downtime, efficiency, scrap rates, and productivity, ensuring targets are met consistently.
- Maintain documentation and reports related to production planning, line output, incidents, and improvements.

Skill Requirements:
- Diploma/Bachelor's degree in Mechanical, Production, Electronics, Industrial Engineering, or a related field.
- 4-8 years of hands-on production supervision experience in a high-volume manufacturing environment managing the production of multiple products.
- Proven expertise in Kaizen, Lean Manufacturing, line balancing, and shop floor management.
- Proven ability to manage large teams, allocate resources effectively, and meet production targets in a fast-paced, dynamic environment.
- Experience with production planning, manpower management, and problem-solving techniques (such as 5 Whys and fishbone analysis).
- Strong understanding of manufacturing KPIs and process documentation.
- Excellent leadership, communication, and conflict-resolution skills.
- Hands-on attitude with a willingness to work on the ground.
- Experience in automotive, consumer electronics, or similar high-volume industries.

Praan is an equal opportunity employer and does not discriminate based on race, religion, caste, gender, disability, or any other criteria. We just care about working with great human beings!

Posted 1 week ago

Apply

3.0 years

0 Lacs

Delhi, Delhi

On-site

Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/M.Sc (IT or CS)/MS
Salary: Up to ₹80k per month (depending on the interview and experience)
Notice Period: Immediate joiners up to 20 days
Note: Only candidates from Delhi/NCR will be preferred.

Job Summary: We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities:
- Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
- Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
- Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
- Develop and manage workflow orchestration using Apache Airflow.
- Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
- Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
- Ensure data quality, governance, and consistency across the pipeline.
- Collaborate with data engineering teams to build scalable and high-performance data solutions.
- Monitor, debug, and enhance big data workflows to improve reliability and efficiency.

Required Skills & Experience:
- 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
- Strong expertise in ETL processes, data transformation, and data warehousing.
- Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
- Proficiency in SQL and handling structured and unstructured data.
- Experience with NoSQL databases like MongoDB.
- Strong programming skills in Python or Scala for scripting and automation.
- Experience in optimizing Spark and MapReduce jobs for high-performance computing.
- Good understanding of data lake architectures and big data best practices.

Preferred Qualifications:
- Experience in real-time data streaming and processing.
- Familiarity with Docker/Kubernetes for deployment and orchestration.
- Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.

If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!

Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person

Posted 1 week ago

Apply

2.0 - 4.0 years

25 - 30 Lacs

Pune

Work from Office

Rapid7 is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. You will be responsible for: liaising with coworkers and clients to elucidate the requirements for each task; conceptualizing and generating infrastructure that allows big data to be accessed and analyzed; reformulating existing frameworks to optimize their functioning; testing such structures to ensure that they are fit for use; preparing raw data for manipulation by data scientists; detecting and correcting errors in your work; ensuring that your work remains backed up and readily accessible to relevant coworkers; and remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 1 week ago

Apply

1.0 - 4.0 years

25 - 30 Lacs

Thane

Work from Office

EsyCommerce is seeking a highly experienced Data Engineer to join our growing team in either Mumbai or Pune. Candidates should hold a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. This role requires a strong foundation in data engineering principles, coupled with experience in application development and data science techniques. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and applications, as well as leveraging analytical skills to transform data into valuable insights. This position calls for a blend of technical expertise, problem-solving abilities, and effective communication skills to drive data-driven solutions that meet business objectives.

Posted 1 week ago

Apply

6.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary: As a DevOps Engineer, the following primary and secondary skills are required.

Strategy:
- Adhere to the technology roadmap for CEE Hive delivery.
- Adopt the bank's technology strategy and drive it within our programs/projects.
- Guide new ideas through the ideation and design process to ensure they are sufficiently defined to address and meet strategic goals.
- Build strong relationships with production support teams and SRE.
- Ensure tech obsolescence across CEE Hive is remediated with no impact to stability.

Business:
- Manage good relationships with stakeholders in Development, Quality Assurance, PSS, Technology Services, and Architecture teams.
- Work with the Development team to develop the ADO CI/CD pipeline and deploy applications in non-production environments.
- Work with the Quality Assurance function and non-functional teams to ensure deployments are completed in non-production environments (SIT, UAT, Regression, and PT) within half a day.
- Grant access in RBAC based on tickets raised by various teams.
- Ensure certificates in CEE Hive applications are up to date and renewed one month before expiry.

Processes:
- Adhere to ADO and bank-defined principles and guidelines on all program delivery.
- Comply with ICS guidelines, security, and data protection requirements.
- Comply with the SDF/SIA process and drive the bank towards automating process areas, removing redundancies.
- Comply with the SCB Group code of conduct and standards.
- Comply with CCIB design and architecture guidelines, data governance, and policies.

Key Responsibilities:

People & Talent:
- Be the face of the bank to your teams and communicate on things happening in the department, bank, and locale to drive the best results.
- Be the first to learn modern technologies proposed by SCB and implement them in existing application CI/CD pipelines.

Risk Management:
- Identify and highlight risks in the process to the Squad Lead.

Governance:
- Must be aware of the Group's regulatory framework and adhere to it.
- Must understand the oversight and controls related to the business unit and job function, and deliver accordingly.

Regulatory & Business Conduct:
- Display exemplary conduct and live by the Group's Values and Code of Conduct.
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
- Lead to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.
- Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
- Serve as a Director of the Board of [insert name of entities].
- Exercise authorities delegated by the Board of Directors and act in accordance with the Articles of Association (or equivalent).

Key stakeholders: CEE Hive ITO, Solution Architect, development teams, QA teams, SRE/PSS, and vendors related to CLDM and interfacing systems.

Other Responsibilities:
- Embed "Here for good" and the Group's brand and values.
- Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures.
- Multiple functions (double hats).
Skills and Experience

Primary Skills:
- 6 to 8 years of experience in DevOps.
- Hands-on experience in Ansible for automating software provisioning, configuration management, and application deployments.
- Hands-on experience in Windows PowerShell scripting to automate deployments on Windows OS.
- Strong OS fundamentals and hands-on skills in any of Linux/Unix/Windows.
- Excellent Python/Bash scripting fundamentals.
- High proficiency with application containerization and cluster management (Docker, Kubernetes, OpenShift).
- Experience in scalability, failover, high availability, and memory, IO, and CPU profiling.
- Hands-on experience in installation, configuration, and administration of any web servers and J2EE-compliant servers.
- In-depth knowledge of build/release systems and hands-on experience in developing and managing CI/CD pipelines: Bitbucket, Jenkins, SonarQube, Artifactory, and app-scan tools.
- Hands-on experience implementing monitoring tools, in any of AppDynamics, Sysdig, Elasticsearch, Grafana, or Prometheus.
- Understanding of common network protocols and services (DNS, HTTP(S), SSH, FTP, SMTP).
- Hands-on experience with any major cloud platform (AWS, Azure).
- Hands-on experience in implementing infrastructure-as-code with Terraform.
- Hands-on experience in supporting and managing database deployments (Oracle, Postgres).

Secondary Skills:
- Hands-on experience in Kubernetes internals and administration.
- Experience with the OpenShift platform (deployments, object creation, storage, Kube administration).
- Strong observability skills (logging, monitoring, troubleshooting, and alert notifications): EFK/ELK stack, Prometheus/Grafana dashboard creation and visualization of metrics.

Qualifications - Skills and Competencies: Ansible; Kubernetes and Helm charts; Unix shell scripting; PowerShell scripting; Core Java; SQL in one of the databases (Oracle, Postgres, MySQL, etc.); Python; any web/app server; Azure DevOps CI/CD pipeline; Git; Jenkins; ElasticSearch/Logstash/Kibana; Grafana; Prometheus.

About Standard Chartered: We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do.
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well.
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term.

What We Offer: In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
- Flexible working options based around home and office locations, with flexible working patterns.
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits.
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description: Work on data analytics to triage and investigate data quality and data pipeline exceptions and reporting issues.

Requirements: This role will primarily support Data Operations and Reporting projects, but will also help with other projects as needed. In this role, you will leverage your strong analytical skills to triage and investigate data quality and data pipeline exceptions and reporting issues. The ideal candidate should be able to work independently and actively engage other functional teams as needed. This role requires researching transactions and events using large amounts of data.

Technical Experience/Qualifications:
- At least 5 years of experience in software development.
- At least 5 years of SQL experience in any RDBMS.
- Minimum 5 years of experience in Python.
- Strong analytical and problem-solving skills.
- Strong communication skills.
- Strong experience with data modeling.
- Strong experience in data analysis and reporting.
- Experience with version control tools such as GitHub.
- Experience with shell scripting and Linux.
- Knowledge of agile and scrum methodologies.
- Preferred: experience in Hive SQL or related technologies such as BigQuery.
- Preferred: experience in big data technologies like Hadoop, AWS/GCP, S3, Hive, Impala, HDFS, Spark, and MapReduce.
- Preferred: experience in reporting tools such as Looker or Tableau.
- Preferred: experience in finance and accounting, but not required.

Job Responsibilities:
- Develop SQL queries as per technical requirements.
- Investigate and fix day-to-day data-related issues (a small illustrative validation sketch follows at the end of this listing).
- Develop test plans and execute test scripts.
- Perform data validation and analysis.
- Develop new reports/dashboards as per technical requirements.
- Modify existing reports/dashboards for bug fixes and enhancements.
- Develop new ETL scripts and modify existing ones for bug fixes and enhancements.
- Monitor ETL processes and fix issues in case of failure.
- Monitor scheduled jobs and fix issues in case of failure.
- Monitor data quality alerts and act on them.

What we offer

Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life.
Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
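As referenced in the responsibilities above, here is a small, hypothetical pandas sketch of the kind of rule-based data-quality triage this role involves; the file and column names are invented.

```python
import pandas as pd

# Hypothetical daily extract pulled while triaging a data-quality alert.
df = pd.read_csv("daily_transactions.csv", parse_dates=["txn_date"])

# Simple rule-based checks of the kind used to triage pipeline exceptions.
checks = {
    "null_txn_ids": int(df["txn_id"].isna().sum()),
    "duplicate_txn_ids": int(df["txn_id"].duplicated().sum()),
    "negative_amounts": int((df["amount"] < 0).sum()),
    "future_dates": int((df["txn_date"] > pd.Timestamp.now()).sum()),
}

failed = {name: count for name, count in checks.items() if count > 0}
if failed:
    print("Data quality exceptions:", failed)  # escalate or open a ticket
else:
    print("All checks passed")
```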

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code.
- Consult with users, clients, and other technology groups on issues, recommend programming solutions, and install and support customer exposure systems.
- Apply fundamental knowledge of programming languages for design specifications.
- Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging.
- Serve as advisor or coach to new or lower-level analysts.
- Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions.
- Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Operate with a limited level of direct supervision, exercising independence of judgement and autonomy; act as SME to senior stakeholders and/or other team members.
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
- 5+ years of proven experience in developing and managing big data solutions using Apache Spark and Scala is a must, with a strong hold on Spark Core, Spark SQL, and Spark Streaming (a short streaming sketch follows at the end of this listing).
- Strong programming skills in Scala, Java, or Python.
- Hands-on experience with technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume.
- Proficiency in SQL and experience with relational databases (Oracle/PL-SQL).
- Experience working on Kafka and JMS/MQ applications.
- Experience working across multiple OSes (Unix, Linux, Windows).
- Familiarity with data warehousing concepts and ETL processes.
- Experience in performance tuning of large technical solutions with significant data volumes.
- Knowledge of data modeling, data architecture, and data integration techniques.
- Knowledge of best practices for data security, privacy, and compliance.
- Experience with Java (Core Java, J2EE, Spring Boot RESTful services), web services (REST, SOAP), XML, JavaScript, microservices, SOA, etc.
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem.
- Experience developing frameworks and utility services, including logging/monitoring.
- Experience delivering high-quality software following continuous delivery and using code quality tools (JIRA, GitHub, Jenkins, Sonar, etc.).
- Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark.
- Profound knowledge of different data storage solutions such as RDBMS (Oracle), Hive, HBase, Impala, and NoSQL databases.

Education: Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed.
Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
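To make the Spark SQL side of these requirements concrete, here is a minimal PySpark sketch of the kind of batch job the listing describes: reading a Hive table, aggregating, and writing a partitioned result back. The table and column names (transactions_raw, txn_amount, and so on) are hypothetical, and the sketch assumes a Spark build with Hive support.

# Minimal PySpark batch job illustrating the Spark SQL + Hive skills above.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-txn-aggregates")
    .enableHiveSupport()          # lets Spark read/write Hive-managed tables
    .getOrCreate()
)

# Read a (hypothetical) Hive table, aggregate per account per day,
# and write the result back as a partitioned Hive table.
txns = spark.table("transactions_raw")

daily = (
    txns.groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
        .agg(F.sum("txn_amount").alias("total_amount"),
             F.count("*").alias("txn_count"))
)

(daily.write
      .mode("overwrite")
      .partitionBy("txn_date")    # partition pruning speeds later reads
      .saveAsTable("transactions_daily"))

spark.stop()

Submitted with spark-submit, a job like this touches Spark Core, Spark SQL, and Hive integration in one place, three of the posting's mandatory skills.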

Posted 1 week ago

Apply

5.0 - 8.0 years

1 - 6 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hello Connections, exciting opportunity alert! We're on the hunt for passionate individuals to join our dynamic team as Data Engineers.

Job Profile: Data Engineer
Experience: Minimum 5 to maximum 8 years
Location: Chennai / Pune / Mumbai / Hyderabad / Bangalore
Mandatory Skills: Big Data | Hadoop | Scala | Spark | Spark SQL | Hive
Qualification: B.Tech / B.E / MCA, Computer Science background (any specialization)

How to apply? Send your CV to sipriyar@sightspectrum.in
Contact number: 6383476138

Don't miss out on this amazing opportunity to accelerate your professional career!

#bigdata #dataengineer #hadoop #spark #python #hive #pyspark
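For candidates weighing the mandatory skills above, a small illustration of Spark SQL over Hive may help. This is a sketch only: the database, table, and partition-column names are invented, and it assumes a Hive-enabled Spark session.

# Sketch of the Spark-SQL-on-Hive work named in this posting.
# Table names and the 'ds' partition column are illustrative only.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-sparksql-example")
         .enableHiveSupport()
         .getOrCreate())

# Run HiveQL through Spark: join a fact table to a dimension and keep
# only one (hypothetical) daily partition.
top_products = spark.sql("""
    SELECT p.product_name,
           SUM(s.quantity) AS units_sold
    FROM   sales s
    JOIN   products p ON s.product_id = p.product_id
    WHERE  s.ds = '2025-07-01'
    GROUP  BY p.product_name
    ORDER  BY units_sold DESC
    LIMIT  10
""")

top_products.show(truncate=False)
spark.stop()

Running HiveQL through spark.sql() is usually the quickest way to exercise the Hadoop, Spark SQL, and Hive combination this post asks for in a single snippet.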

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary
We are seeking an Apache Hadoop Subject Matter Expert (SME) who will be responsible for designing, optimizing, and scaling Spark-based data processing systems. This role involves hands-on experience in Spark architecture and core functionalities, focusing on building resilient, high-performance distributed data systems. You will collaborate with engineering teams to deliver high-throughput Spark applications and solve complex data challenges in real-time processing, big data analytics, and streaming. If you’re passionate about working in fast-paced, dynamic environments and want to be part of the cutting edge of data solutions, this role is for you.

We’re looking for someone who can:
- Design and optimize distributed Spark-based applications, ensuring low-latency, high-throughput performance for big data workloads
- Troubleshooting: provide expert-level troubleshooting for any data or performance issues related to Spark jobs and clusters
- Data processing expertise: work extensively with large-scale data pipelines using Spark's core components (Spark SQL, DataFrames, RDDs, Datasets, and Structured Streaming)
- Performance tuning: conduct deep-dive performance analysis, debugging, and optimization of Spark jobs to reduce processing time and resource consumption
- Cluster management: collaborate with DevOps and infrastructure teams to manage Spark clusters on platforms like Hadoop/YARN, Kubernetes, or cloud platforms (AWS EMR, GCP Dataproc, etc.)
- Real-time data: design and implement real-time data processing solutions using Apache Spark Streaming or Structured Streaming

This role requires flexibility to work in rotational shifts, based on team coverage needs and customer demand. Candidates should be comfortable supporting operations in a 24x7 environment and willing to adjust working hours accordingly.

What makes you the right fit for this position:
- Expert in Apache Spark: in-depth knowledge of Spark architecture, execution models, and components (Spark Core, Spark SQL, Spark Streaming, etc.)
- Data engineering practices: solid understanding of ETL pipelines, data partitioning, shuffling, and serialization techniques to optimize Spark jobs
- Big data ecosystem: knowledge of related big data technologies such as Hadoop, Hive, Kafka, HDFS, and YARN
- Performance tuning and debugging: demonstrated ability to tune Spark jobs, optimize query execution, and troubleshoot performance bottlenecks
- Experience with cloud platforms: hands-on experience running Spark clusters on cloud platforms such as AWS, Azure, or GCP
- Containerization and orchestration: experience with containerized Spark environments using Docker and Kubernetes is a plus

Good to have:
- Certification in Apache Spark or related big data technologies
- Experience working with Acceldata's data observability platform or similar tools for monitoring Spark jobs
- Demonstrated experience with scripting languages like Bash, PowerShell, and Python
- Familiarity with concepts related to application, server, and network security management
- Certifications from leading cloud providers (AWS, Azure, GCP) and expertise in Kubernetes would be significant advantages
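As a concrete reference point for the Structured Streaming and tuning skills listed above, here is a minimal sketch of a Kafka-to-console streaming aggregation. The broker address, topic, and event schema are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

# Minimal Structured Streaming sketch: consume JSON events from Kafka,
# window-aggregate, write to the console sink. All names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = (SparkSession.builder
         .appName("clickstream-streaming")
         .getOrCreate())

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
       .option("subscribe", "events")                      # placeholder topic
       .load())

events = (raw.selectExpr("CAST(value AS STRING) AS json")
             .select(F.from_json("json", event_schema).alias("e"))
             .select("e.*"))

# Tumbling 1-minute windows with a watermark to bound state size,
# the kind of tuning concern the posting calls out.
per_minute = (events
              .withWatermark("event_ts", "5 minutes")
              .groupBy(F.window("event_ts", "1 minute"), "user_id")
              .agg(F.sum("amount").alias("spend")))

query = (per_minute.writeStream
         .outputMode("update")
         .format("console")
         .start())

query.awaitTermination()

The withWatermark call is where the state-size and latency trade-offs mentioned under performance tuning show up in practice: a longer watermark tolerates later events at the cost of more retained state.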

Posted 1 week ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Admissions Associate
Company: Hive School
Location: Gurgaon (on-site)
Experience: 0-2 years
Employment Type: Internship (2 months) → Full-time

About Hive School: India's first specialized sales school helping professionals transition into high-paying B2B SaaS sales careers. Featured on Shark Tank India, with a 90% placement rate and packages ranging from 15-30 LPA.

Role Overview: Join our admissions team to guide prospective students through their career transformation journey. You'll be the first point of contact for professionals looking to break into or advance in B2B SaaS sales.

Key Responsibilities:
- Conduct discovery calls with prospective students to understand career goals and challenges
- Guide candidates through the application and selection process
- Match student profiles with appropriate program tracks (Online/Offline PGP)
- Maintain detailed records of all candidate interactions and progress
- Support admissions events, mixers, and information sessions
- Collaborate with the curriculum team to ensure program-candidate fit
- Follow up with applicants and manage the entire admissions funnel
- Provide insights on candidate trends and market feedback

Requirements:
- 0-2 years of professional experience (fresh graduates welcome)
- Strong passion for education and career development
- Basic understanding of SaaS business models (or willingness to learn quickly)
- Excellent communication and interpersonal skills
- Operations-focused mindset with attention to detail
- Quick learner who can adapt in a fast-paced startup environment
- Comfortable with technology and CRM tools
- Empathetic approach to understanding student needs

What We Offer:
- 2-month internship period for mutual evaluation
- Transition to a full-time role based on performance
- Competitive fixed salary plus performance-based commissions
- ESOP participation once you demonstrate value creation
- Direct exposure to building India's first sales school
- Learning opportunities in education, sales, and startup operations
- Mentorship from industry leaders and founders

Application Process:
1. Submit your resume with a brief note on why you're interested in education
2. Initial screening call
3. Case study discussion
4. Final interview with founders

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Persistent
We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor’s mindset, commitment to client success, and agility to thrive in the dynamic environment have enabled us to sustain our growth momentum by reporting $1,409.1M revenue in FY25, delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping market leaders transform their industries.

We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we’ve maintained a strong employee satisfaction score of 8.2/10.

At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details, please visit www.persistent.com.

About the Position
We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.

What You'll Do
- Manage customers' priorities of projects and requests
- Assess customer needs using a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks, and cost
- Design and implement software products (Big Data related), including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches, and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show a solid level of understanding across a range of technical skills, attitudes, and behaviors
- Deliver great solutions
- Stay focused on driving value back into the business

Expertise You'll Bring
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks such as HBase, Talend, and NoSQL databases
- Apache Spark or other streaming Big Data processing, preferred
- Java or Big Data technologies, a plus

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.

Inclusive Environment
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry’s best

Let’s unleash your full potential at Persistent - persistent.com/careers
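To ground the posting's "transform the data into a more usable format" line, here is a compact ETL sketch under assumed inputs: the HDFS paths and column names are made up for illustration.

# Raw CSV on HDFS -> cleaned, typed Parquet. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-parquet-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("hdfs:///data/raw/orders/"))          # hypothetical HDFS path

cleaned = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropna(subset=["order_id", "amount"])  # drop unusable rows
           .dropDuplicates(["order_id"]))          # keeps re-runs idempotent

(cleaned.write
        .mode("overwrite")
        .parquet("hdfs:///data/curated/orders/"))  # columnar and splittable

spark.stop()

Writing the curated layer as Parquet rather than CSV is the usual design choice here: downstream Hive or Spark readers get column pruning and predicate pushdown for free.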

Posted 1 week ago

Apply

6.0 - 8.0 years

4 - 7 Lacs

Chennai

On-site

Job ID: 35040
Location: Chennai, IN
Area of interest: Technology
Job type: Regular Employee
Work style: Office Working
Opening date: 26 Jul 2025

Job Summary
As a DevOps Engineer, the following primary and secondary skills are required.

Strategy
- Adhere to the technology roadmap for CEE Hive delivery
- Adopt the bank’s technology strategy and drive it within our programs/projects
- Guide new ideas through the ideation and design process to ensure they are sufficiently defined to address and meet strategic goals
- Build strong relationships with production support teams and SRE
- Ensure tech obsolescence across CEE Hive is remediated with no impact to stability

Business
- Manage good relationships with stakeholders in Development, Quality Assurance, PSS, Technology Services, and Architecture teams
- Work with the Development team to develop the ADO CI/CD pipeline and deploy applications in non-production environments
- Work with the Quality Assurance function and non-functional teams to ensure deployments are completed in non-production (SIT, UAT, Regression, and PT) within half a day
- Grant access in RBAC based on tickets raised by various teams
- Ensure certificates in CEE Hive applications are up to date, renewing each certificate one month before expiry

Processes
- Adhere to ADO and bank-defined principles and guidelines on all program delivery
- Comply with ICS guidelines, security, and data protection
- Comply with the SDF/SIA process and drive the bank towards automating process areas, removing redundancies
- Comply with the SCB Group code of conduct and standards
- Comply with CCIB design and architecture guidelines, data governance, and policies

Key Responsibilities

People & Talent
- Be the face of the bank to your teams and communicate on things happening in the department, bank, and locale to drive the best results
- Be the first person to learn modern technologies proposed by SCB and implement them in existing application CI/CD pipelines

Risk Management
- Identify and highlight risks in the process to the Squad Lead

Governance
- Must be aware of the Group’s regulatory framework and adhere to it
- Must understand the oversight and controls related to the Business Unit and job function, and deliver

Regulatory & Business Conduct
- Display exemplary conduct and live by the Group’s Values and Code of Conduct
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct
- Lead to achieve the outcomes set out in the Bank’s Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment
- Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters
- Serve as a Director of the Board of [insert name of entities]
- Exercise authorities delegated by the Board of Directors and act in accordance with Articles of Association (or equivalent)

Key Stakeholders
CEE Hive ITO, Solution Architect, development teams, QA teams, SRE/PSS, vendors related to CLDM, and interfacing systems

Other Responsibilities
- Embed Here for good and the Group’s brand and values
- Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures
- Multiple functions (double hats)

Skills and Experience

Primary Skills
- 6 to 8 years of experience in DevOps
- Hands-on experience in Ansible for automating software provisioning, configuration management, and application deployments
- Hands-on experience in Windows PowerShell scripting to automate deployments on Windows OS
- Strong OS fundamentals and hands-on skills in any of Linux/Unix/Windows
- Excellent Python/Bash scripting fundamentals
- High proficiency with application containerization and cluster management (Docker, Kubernetes, OpenShift)
- Experience in scalability, failover, high availability, and memory, IO, and CPU profiling
- Hands-on experience in installation/configuration/administration of web servers and J2EE-compliant servers
- In-depth knowledge of build/release systems and hands-on experience developing and managing CI/CD pipelines: Bitbucket, Jenkins, SonarQube, Artifactory, and app scan tools
- Hands-on experience implementing monitoring tools in any of AppDynamics, Sysdig, Elasticsearch, Grafana, Prometheus
- Understanding of common network protocols and services (DNS, HTTP(S), SSH, FTP, SMTP)
- Hands-on experience on any major cloud platform (AWS, Azure)
- Hands-on experience implementing infrastructure-as-code with Terraform
- Hands-on experience supporting and managing database deployments (Oracle, Postgres)

Secondary Skills
- Hands-on experience in Kubernetes internals and administration; experience with the OpenShift platform (deployments, object creation, storage, Kube administration)
- Strong observability skills (logging, monitoring, troubleshooting, alert notifications): EFK/ELK stack, Prometheus/Grafana dashboard creation and visualization of metrics

Qualifications: Skills and Competencies
- Ansible
- Kubernetes and Helm charts
- Unix shell scripting
- PowerShell scripting
- Core Java
- SQL in one of the databases (Oracle, Postgres, MySQL, etc.)
- Python
- Any web/app server
- Azure DevOps CI/CD pipeline
- Git
- Jenkins
- ElasticSearch / LogStash / Kibana
- Grafana
- Prometheus

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.

Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion. Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, combined to a 30-day minimum
- Flexible working options based around home and office locations, with flexible working patterns
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential

www.sc.com/careers
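One of the responsibilities above, renewing certificates a month before expiry, lends itself to automation. Below is a small, hypothetical Python check that could run from a CI/CD pipeline or a cron job; the endpoint hostnames are placeholders.

# Flag TLS certificates expiring within a warning window.
# Endpoint list and window are assumptions, not from the posting.
import socket
import ssl
from datetime import datetime, timezone

ENDPOINTS = [("app1.example.com", 443), ("app2.example.com", 443)]  # placeholders
WARN_DAYS = 30  # "renew one month before expiry"

def days_until_expiry(host: str, port: int) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like: 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

for host, port in ENDPOINTS:
    remaining = days_until_expiry(host, port)
    status = "RENEW NOW" if remaining <= WARN_DAYS else "OK"
    print(f"{host}:{port} expires in {remaining} days [{status}]")

Wired into a pipeline or a Prometheus-style exporter, a check like this turns the renewal deadline into an alert instead of a manual calendar entry.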

Posted 1 week ago

Apply

12.0 - 15.0 years

5 - 8 Lacs

Chennai

On-site

Job ID: 35024
Location: Chennai, IN
Area of interest: Technology
Job type: Regular Employee
Work style: Office Working
Opening date: 18 Jul 2025

Job Summary
The Senior Manager in Software Development for the banking domain will lead and manage a team of developers to deliver high-quality software solutions. The role requires extensive experience in Java/J2EE, Spring, Spring Boot, microservices, APIs, DevOps, and Hibernate. The Senior Manager will be responsible for overseeing the entire software development lifecycle, ensuring adherence to Agile and Hive model processes, and maintaining compliance with industry standards. This position demands excellent communication and problem-solving skills, and the ability to interact positively with stakeholders.

Key Responsibilities

Strategy
- Develop and implement software development strategies aligned with the organization's goals and objectives
- Drive innovation and continuous improvement in software development practices
- Ensure the adoption of best practices and emerging technologies in the banking domain

Business
- Collaborate with business stakeholders to understand their requirements and translate them into technical solutions
- Ensure that software solutions meet business needs and deliver value to the organization
- Support business growth by developing scalable and robust software applications

Processes
- Oversee the entire software development lifecycle, from requirement gathering to deployment and maintenance
- Ensure adherence to Agile and Tribe model processes, including sprint planning, daily stand-ups, and retrospectives
- Maintain clear and comprehensive documentation throughout the development process

People & Talent
- Lead, mentor, and develop a team of software developers, fostering a culture of collaboration and continuous learning
- Conduct performance reviews, provide feedback, and identify opportunities for professional development
- Attract and retain top talent in the software development field

Risk Management
- Identify and mitigate risks associated with software development projects
- Ensure compliance with industry standards and regulatory requirements
- Implement robust testing and quality assurance processes to deliver error-free software

Governance
- Establish and enforce coding standards, development guidelines, and best practices
- Ensure that all software development activities are aligned with the organization's governance framework
- Monitor and report on the progress of software development projects to senior management

Regulatory & Business Conduct
- Display exemplary conduct and live by the Group’s Values and Code of Conduct
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct
- Lead to achieve the outcomes set out in the Bank’s Conduct Principles
- Effectively and collaboratively identify, escalate, mitigate, and resolve risk, conduct, and compliance matters

Key Stakeholders
Our stakeholders include Product Owners (PO), the Ops team, business partners, Engineering Leads (EL), Sub-Domain Tech Leads (SDTL), Chapter Leads (CL), production support teams, process and audit teams, the integration team, and surrounding interfacing systems, such as upstream and downstream teams.

Qualifications
- Education: Bachelor’s or Master’s degree
- Experience: 12 to 15 years
- Skill: Software Development Life Cycle (SDLC)
- Preference: relevant skills certifications

Skills and Experience
- Java/J2EE, Spring, Spring Boot, microservices, APIs, Hibernate
- Code development: Eclipse, IntelliJ
- Oracle or SQL Server PL/SQL development and data model design
- Microservices architecture, web services (REST, SOAP), API design and development
- DevOps tools and CI/CD processes: GitFlow, Bitbucket
- Collaboration: GitHub, JIRA
- Continuous build, integration, and deployment concepts and tools: Maven, SonarQube, Jenkins, and RunDeck
- Swagger/RAML, automated test configuration, cloud architecture and design, and container management: Docker

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.

Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion. Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations
- Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, combined to a 30-day minimum
- Flexible working options based around home and office locations, with flexible working patterns
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and a range of self-help toolkits
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning
- Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential

www.sc.com/careers
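As one concrete slice of the CI/CD toolchain this role oversees, here is a hedged Python sketch of a pipeline step that fails the build when a SonarQube quality gate is not passed. The server URL, project key, and token are placeholders, and the response shape should be confirmed against your SonarQube version.

# Break the build if the SonarQube quality gate is not OK.
# URL, project key, and token are hypothetical; inject real values from CI secrets.
import sys
import requests

SONAR_URL = "https://sonar.example.com"   # placeholder server
PROJECT_KEY = "payments-service"          # placeholder project
TOKEN = "squ_xxx"                         # placeholder token

resp = requests.get(
    f"{SONAR_URL}/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=(TOKEN, ""),                     # token passed as basic-auth username
    timeout=30,
)
resp.raise_for_status()

status = resp.json()["projectStatus"]["status"]   # "OK" or "ERROR"
print(f"Quality gate for {PROJECT_KEY}: {status}")
if status != "OK":
    sys.exit(1)                           # non-zero exit fails the pipeline stage

A step like this is what turns the governance bullet "establish and enforce coding standards" into something a Jenkins or Azure DevOps stage can actually enforce.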

Posted 1 week ago

Apply

7.0 years

5 - 6 Lacs

Chennai

On-site

7+ years of experience in Big Data with strong expertise in Spark and Scala.

Mandatory Skills:
- Big Data, primarily Spark and Scala
- Strong knowledge of HDFS, Hive, and Impala, with knowledge of Unix, Oracle, and Autosys
- Not limited to Spark batch: Spark Streaming experience is required
- NoSQL DB experience: HBase/MongoDB/Couchbase

Good to Have: Agile methodology and banking expertise; strong communication skills

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
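Since the post asks for Spark Streaming together with a NoSQL store, here is a minimal Structured Streaming sketch whose micro-batches are handed to a writer function, which is where an HBase, MongoDB, or Couchbase connector call would go. The built-in rate source just generates test rows, so the snippet runs without any external systems.

# Streaming micro-batches routed through foreachBatch, the usual hook
# for NoSQL sinks. The actual connector call is left as a comment.
from pyspark.sql import SparkSession, DataFrame

spark = SparkSession.builder.appName("stream-to-nosql-sketch").getOrCreate()

stream = (spark.readStream
          .format("rate")                # test source: timestamp, value columns
          .option("rowsPerSecond", 10)
          .load())

def write_batch(batch_df: DataFrame, batch_id: int) -> None:
    # In a real job, call the HBase/Mongo/Couchbase writer here;
    # printing row counts keeps this sketch runnable on its own.
    print(f"batch {batch_id}: {batch_df.count()} rows")

query = (stream.writeStream
         .foreachBatch(write_batch)      # per-micro-batch sink hook
         .start())

query.awaitTermination()

foreachBatch is a common design choice when the target store has no native streaming sink: each micro-batch arrives as an ordinary DataFrame, so any batch connector can be reused.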

Posted 1 week ago

Apply