2.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
The role requires strategic thinking and planning, and provides expertise throughout the entire product development life cycle, with strong knowledge of SAS Viya programming, API architecture, Kubernetes, and the Risk and Finance domain. The role also requires ownership, making sure that quality is baked in from the start.

Key Responsibilities: N/A

Skills and Experience:
- Hands-on Python for data manipulation (pandas, NumPy); expert in the SAS developer framework and SAS development work
- Desirable: skills in the SAS admin framework
- Desirable: skills in the Hadoop development framework
- Sound statistical knowledge and analytical and problem-solving skills are desirable
- Good to have: knowledge of big data technologies (Hortonworks HDP, Apache Hadoop, HDFS, Hive, Sqoop, Flume, ZooKeeper, HBase, Oozie, Spark, NiFi, Kafka, SnapLogic, AWS, Redshift)
- Experience with monitoring tools
- Development capability in Python, Spark, SAS, and R
- Good management and analytical skills
- Good written and oral communication skills
- Good understanding of and experience in projects (e.g. SDLC, Agile methodology)
- Desirable: skills in the big data space (Hadoop stack: HDFS, Pig, Hive, HBase, Sqoop, etc.)
- Ability to debug and write/modify shell scripts and Python
- Willing to work on-call support over weekends
- Liaise with multiple application teams and coordinate issue resolution

Qualifications: N/A

Together we:
- Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
- Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
- Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing:
- Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations
- Time off including annual leave, parental/maternity leave (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, combined to a minimum of 30 days
- Flexible working options based around home and office locations, with flexible working patterns
- Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform; development courses for resilience and other human skills; a global Employee Assistance Programme; sick leave; mental health first-aiders; and a range of self-help toolkits
- A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning
- Being part of an inclusive and values-driven organisation that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential
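For illustration, a minimal sketch of the pandas/NumPy data-manipulation skills this role names. The data, column names, and fill strategy are all hypothetical, not part of the posting:

```python
import pandas as pd
import numpy as np

# Hypothetical risk-exposure data; names and values are illustrative only.
trades = pd.DataFrame({
    "desk": ["rates", "fx", "rates", "credit"],
    "notional": [1_200_000.0, 450_000.0, np.nan, 800_000.0],
})

# Fill missing notionals with the per-desk median, then aggregate per desk.
trades["notional"] = trades.groupby("desk")["notional"].transform(
    lambda s: s.fillna(s.median())
)
exposure = trades.groupby("desk", as_index=False)["notional"].sum()
print(exposure)
```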
Posted 2 months ago
3.0 - 6.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Job Role:
- Strong Spark programming experience with Java
- Good knowledge of SQL query writing and shell scripting
- Experience working in Agile mode
- Analyze, design, develop, deploy and operate high-performance, high-quality services that serve users in a cloud environment
- Good understanding of the client ecosystem and expectations
- In charge of code reviews, the integration process, test organization, and quality of delivery
- Take part in development
- Experienced in writing queries using SQL commands
- Experienced in deploying and operating code in a cloud environment
- Experienced in working without much supervision

Your Profile:
- Primary skills: Java, Spark, SQL
- Secondary skills (good to have): Hadoop or any cloud technology, Kafka, or BO

What you'll love about working here: Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means that when the future doesn't look as bright as you'd like, you have the opportunity to make change, to rewrite it. When you join Capgemini, you don't just start a new job. You become part of something bigger: a diverse collective of free-thinkers, entrepreneurs and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future. At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment that brings out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.
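As a small illustration of the Spark-plus-SQL work described above: the role calls for Java, but a PySpark sketch is shown here only for brevity; the table and query are made up:

```python
from pyspark.sql import SparkSession

# Minimal sketch of running plain SQL over a Spark DataFrame.
spark = SparkSession.builder.appName("orders-demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "books", 250.0), (2, "books", 100.0), (3, "toys", 80.0)],
    ["order_id", "category", "amount"],
)
orders.createOrReplaceTempView("orders")

# Query the registered view with SQL, as the posting emphasizes.
spark.sql("""
    SELECT category, COUNT(*) AS n_orders, SUM(amount) AS revenue
    FROM orders
    GROUP BY category
""").show()
```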
Posted 2 months ago
4.0 - 9.0 years
5 - 8 Lacs
Gurugram
Work from Office
RARR Technologies is looking for a HADOOP ADMIN to join our dynamic team and embark on a rewarding career journey. Responsible for managing day-to-day administrative tasks and providing support to employees, customers, and visitors.

Responsibilities:
1. Manage incoming and outgoing mail, packages, and deliveries
2. Maintain office supplies and equipment, and ensure that they are in good working order
3. Coordinate scheduling and meetings, and make arrangements for travel and accommodations as needed
4. Greet and assist visitors, and answer and direct phone calls as needed

Requirements:
1. Experience in an administrative support role, with a track record of delivering high-quality work
2. Excellent organizational and time-management skills
3. Strong communication and interpersonal skills, with the ability to interact effectively with employees, customers, and visitors
4. Proficiency with Microsoft Office and other common office software, including email and calendar applications
Posted 2 months ago
6.0 - 10.0 years
11 - 15 Lacs
Pune
Work from Office
We at Onix Datametica Solutions Private Limited are looking for a Bigdata Lead with a passion for cloud and knowledge of on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like. Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description:
- 6+ years of overall experience in developing, testing and implementing Big Data projects using Hadoop, Spark and Hive
- Hands-on experience playing a lead role in Big Data projects: responsible for implementing one or more tracks within a project, identifying and assigning tasks within the team, and providing technical guidance to team members
- Experience in setting up Hadoop services and implementing ETL/ELT pipelines, working with terabytes of data ingestion and processing from varied systems
- Experience working in an onshore/offshore model, leading technical discussions with customers, mentoring and guiding teams on technology, and preparing HDD and LDD documents

Required Skills and Abilities:
- Mandatory skills: Spark, Scala/PySpark, Hadoop ecosystem including Hive, Sqoop, Impala, Oozie, Hue, Java, Python, SQL, Flume, bash (shell scripting)
- Experience implementing CI/CD pipelines and working experience with SCM tools such as Git, Bitbucket, etc.
- Hands-on experience writing data ingestion and data processing pipelines using Spark and SQL; experience implementing SCD type 1 & 2, auditing, and exception-handling mechanisms (see the ingestion sketch after this description)
- Data warehousing project implementation with either a Scala or Hadoop programming background
- Proficient with various development methodologies such as waterfall and agile/scrum
- Exceptional communication, organisation, and time management skills
- Collaborative approach to decision-making
- Strong analytical skills
- Good to have: certifications in any of GCP, AWS or Azure, Cloudera
- Ability to work on multiple projects simultaneously, prioritising appropriately
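A minimal sketch of the kind of ingestion pipeline with auditing and exception handling named above. Paths, columns, and the scheduler hand-off are all hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative ingestion sketch only: paths and columns are hypothetical.
spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

try:
    raw = (spark.read
           .option("header", True)
           .csv("/landing/orders/2024-06-01/"))  # hypothetical landing zone

    # Audit columns: load timestamp, load date and source file for lineage.
    curated = (raw
               .withColumn("load_ts", F.current_timestamp())
               .withColumn("load_dt", F.to_date(F.col("load_ts")))
               .withColumn("source_file", F.input_file_name()))

    # Partition the curated zone by load date.
    curated.write.mode("append").partitionBy("load_dt").parquet("/curated/orders/")
except Exception as exc:
    # Exception-handling hook: surface the failure to the scheduler (e.g. Oozie).
    print(f"Ingestion failed: {exc}")
    raise
```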
Posted 2 months ago
5.0 years
7 Lacs
Hyderabad
Work from Office
Design, implement, and optimize Big Data solutions using Hadoop and Scala. You will manage data processing pipelines, ensure data integrity, and perform data analysis. Expertise in Hadoop ecosystem, Scala programming, and data modeling is essential for this role.
Posted 2 months ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
The Big Data (Scala, HIVE) role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Big Data (Scala, HIVE) domain.
Posted 2 months ago
2.0 - 4.0 years
4 - 6 Lacs
Chennai
Work from Office
The Big Data (Scala, HIVE) role involves working with relevant technologies, ensuring smooth operations, and contributing to business objectives. Responsibilities include analysis, development, implementation, and troubleshooting within the Big Data (Scala, HIVE) domain.
Posted 2 months ago
1.0 - 4.0 years
1 - 5 Lacs
Mumbai
Work from Office
Location: Mumbai

Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.

Key Responsibilities:
- Build scalable batch and real-time ETL pipelines using Spark and Hive
- Integrate structured and unstructured data sources
- Perform performance tuning and code optimization
- Support orchestration and job scheduling (NiFi, Airflow)

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Proficiency in PySpark/Scala with Hive/Impala
- Experience with data partitioning, bucketing, and optimization
- Familiarity with Kafka, Iceberg and NiFi is a must
- Knowledge of banking or financial datasets is a plus
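A hedged sketch of the partitioning and bucketing skills listed above, writing a Hive-managed table from PySpark. The database, table, and column names are made up for illustration:

```python
from pyspark.sql import SparkSession

# Illustrative only; assumes a Hive metastore is configured for this session.
spark = (SparkSession.builder
         .appName("txn-etl")
         .enableHiveSupport()
         .getOrCreate())

txns = spark.read.parquet("/staging/transactions/")  # hypothetical path

# Partition by business date and bucket by account to speed up joins and scans.
(txns.write
     .mode("overwrite")
     .partitionBy("txn_date")
     .bucketBy(16, "account_id")
     .sortBy("account_id")
     .saveAsTable("demo_db.transactions_curated"))
```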
Posted 2 months ago
5.0 - 8.0 years
4 - 8 Lacs
Pune
Work from Office
Role Purpose: The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequently occurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, Technical Test performance

Mandatory Skills: Hadoop
Experience: 5-8 years
Posted 2 months ago
3.0 - 6.0 years
9 - 14 Lacs
Mumbai
Work from Office
Role Overview: We are looking for a Talend Data Catalog Specialist to drive enterprise data governance initiatives by implementing Talend Data Catalog and integrating it with Apache Atlas for unified metadata management within a Cloudera-based data lakehouse. The role involves establishing metadata lineage, glossary harmonization, and governance policies to enhance trust, discovery, and compliance across the data ecosystem.

Key Responsibilities:
- Set up and configure Talend Data Catalog to ingest and manage metadata from source systems, the data lake (HDFS), Iceberg tables, the Hive metastore, and external data sources
- Develop and maintain business glossaries, data classifications, and metadata models
- Design and implement bi-directional integration between Talend Data Catalog and Apache Atlas to enable metadata synchronization, lineage capture, and policy alignment across the Cloudera stack
- Map technical metadata from Hive/Impala to business metadata defined in Talend
- Capture end-to-end lineage of data pipelines (e.g., from ingestion in PySpark to consumption in BI tools) using Talend and Atlas
- Provide impact analysis for schema changes, data transformations, and governance rule enforcement
- Support the definition and rollout of enterprise data governance policies (e.g., ownership, stewardship, access control)
- Enable role-based metadata access, tagging, and data sensitivity classification
- Work with data owners, stewards, and architects to ensure data assets are well documented, governed, and discoverable
- Provide training to users on leveraging the catalog for search, understanding, and reuse

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 6-12 years in data governance or metadata management, with at least 2-3 years in Talend Data Catalog
- Talend Data Catalog, Apache Atlas, Cloudera CDP, Hive/Impala, Spark, HDFS, SQL
- Business glossary, metadata enrichment, lineage tracking, stewardship workflows
- Hands-on experience in Talend-Atlas integration, through REST APIs, Kafka hooks, or metadata bridges
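As a rough sketch of the REST-API integration path mentioned above: querying Hive table metadata from Apache Atlas's v2 basic-search endpoint. The host, port, credentials, and response handling are assumptions for illustration, not a Talend-specific bridge:

```python
import requests

# Placeholders: point at your Atlas instance and real credentials.
ATLAS = "http://atlas.example.com:21000"
AUTH = ("admin", "admin")

resp = requests.get(
    f"{ATLAS}/api/atlas/v2/search/basic",
    params={"typeName": "hive_table", "limit": 10},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

for entity in resp.json().get("entities", []):
    # Each hit carries the qualified name a catalog sync would map onto
    # business metadata in Talend.
    print(entity["typeName"], entity["attributes"].get("qualifiedName"))
```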
Posted 2 months ago
3.0 - 7.0 years
6 - 10 Lacs
Mumbai
Work from Office
Role Overview: Looking for a Kafka SME to design and support real-time data ingestion pipelines using Kafka within a Cloudera-based lakehouse architecture.

Key Responsibilities:
- Design Kafka topics, partitions, and the schema registry
- Implement producer-consumer apps using Spark Structured Streaming
- Set up Kafka Connect, monitoring, and alerts
- Ensure secure, scalable message delivery

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Deep understanding of Kafka internals and ecosystem
- Integration with Cloudera and NiFi
- Schema evolution and serialization (Avro, Parquet)
- Performance tuning and fault tolerance

Preferred technical and professional experience:
- Good communication skills
- India market experience is preferred
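A minimal consumer sketch using Spark Structured Streaming's Kafka source, the pattern named in the responsibilities above. The broker address, topic name, and checkpoint path are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()

# Subscribe to a topic; broker and topic names are placeholders.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers key/value as binary; cast before processing.
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "topic", "partition", "offset",
)

query = (parsed.writeStream
         .format("console")
         .option("checkpointLocation", "/tmp/chk/events")  # needed for fault tolerance
         .start())
query.awaitTermination()
```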
Posted 2 months ago
2.0 - 5.0 years
4 - 7 Lacs
Chennai
Work from Office
Experience:
- 2+ years of experience in IT, with at least 1 year of experience in cloud and system administration
- At least 2 years of experience with, and a strong understanding of, big data technologies in the Hadoop ecosystem: Hive, HDFS, MapReduce, Flume, Pig, Cloudera, HBase, Sqoop, Spark, etc.

Job Overview: Smartavya Analytica Private Limited is seeking an experienced Hadoop Administrator to manage and support our Hadoop ecosystem. The ideal candidate will have strong expertise in Hadoop cluster administration, excellent troubleshooting skills, and a proven track record of maintaining and optimizing Hadoop environments.

Key Responsibilities:
- Install, configure, and manage Hadoop clusters, including HDFS, YARN, Hive, HBase, and other ecosystem components
- Monitor and manage Hadoop cluster performance, capacity, and security
- Perform routine maintenance tasks such as upgrades, patching, and backups
- Implement and maintain data ingestion processes using tools like Sqoop, Flume, and Kafka
- Ensure high availability and disaster recovery of Hadoop clusters
- Collaborate with development teams to understand requirements and provide appropriate Hadoop solutions
- Troubleshoot and resolve issues related to the Hadoop ecosystem
- Maintain documentation of Hadoop environment configurations, processes, and procedures

Requirements:
- Experience installing, configuring and tuning Hadoop distributions; hands-on experience with Cloudera
- Understanding of Hadoop design principles and the factors that affect distributed system performance, including hardware and network considerations
- Provide infrastructure recommendations, capacity planning, and workload management
- Develop utilities to monitor the cluster better (Ganglia, Nagios, etc.)
- Manage large clusters with huge volumes of data
- Perform cluster maintenance tasks: creation and removal of nodes, cluster monitoring and troubleshooting
- Manage and review Hadoop log files
- Install and implement security for Hadoop clusters
- Install Hadoop updates, patches and version upgrades, and automate the same through scripts
- Act as the point of contact for vendor escalation; work with Hortonworks in resolving issues
- Conceptual/working knowledge of basic data management concepts such as ETL, reference/master data, data quality, and RDBMS
- Working knowledge of a scripting language such as Shell, Python, or Perl
- Experience with orchestration and deployment tools

Academic Qualification:
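As an illustration of the routine health checks this role automates, a small Python wrapper around two standard HDFS admin commands. It assumes the `hdfs` CLI is on PATH and the caller has the required HDFS privileges:

```python
import subprocess

def hdfs_report() -> str:
    """Return the datanode/capacity summary from `hdfs dfsadmin -report`."""
    out = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def fsck_root() -> str:
    """Run the filesystem checker; inspect the output for under-replicated blocks."""
    out = subprocess.run(
        ["hdfs", "fsck", "/", "-blocks"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

if __name__ == "__main__":
    # Print just the first summary line, e.g. "Configured Capacity: ...".
    print(hdfs_report().splitlines()[0])
```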
Posted 2 months ago
6.0 - 10.0 years
10 - 16 Lacs
Mumbai
Work from Office
Responsibilities: Design and implement Big Data solutions, complex ETL pipelines and data modernization projects.

Required Past Experience:
- 6+ years of overall experience in developing, testing and implementing big data projects using Hadoop, Spark, Hive and Sqoop
- Hands-on experience playing a lead role in big data projects: responsible for implementing one or more tracks within a project, identifying and assigning tasks within the team, and providing technical guidance to team members
- Experience in setting up Hadoop services and implementing extract-transform-load / extract-load-transform (ETL/ELT) pipelines, working with terabytes/petabytes of data ingestion and processing from varied systems
- Experience working in an onshore/offshore model, leading technical discussions with customers, mentoring and guiding teams on technology, and preparing High-Level Design and Low-Level Design (HDD & LDD) documents

Required Skills and Abilities:
- Mandatory skills: Spark, Scala/PySpark, Hadoop ecosystem including Hive, Sqoop, Impala, Oozie, Hue, Java, Python, SQL, Flume, bash (shell scripting)
- Secondary skills: Apache Kafka, Storm, distributed systems, a good understanding of networking and security (platform and data) concepts, Kerberos, Kubernetes
- Understanding of data governance concepts and experience implementing metadata capture, lineage capture, and a business glossary
- Experience implementing continuous integration / continuous delivery (CI/CD) pipelines and working experience with source code management (SCM) tools such as Git, Bitbucket, etc.
- Ability to assign and manage tasks for team members, provide technical guidance, and work with architects on High-Level Design, Low-Level Design (HDD & LDD) and proofs of concept
- Hands-on experience writing data ingestion and data processing pipelines using Spark and SQL; experience implementing slowly changing dimension (SCD) types 1 & 2, auditing, and exception-handling mechanisms (a minimal SCD sketch follows this description)
- Data warehousing project implementation with either a Java- or Scala-based Hadoop programming background
- Proficient with various development methodologies such as waterfall and agile/scrum
- Exceptional communication, organization, and time management skills
- Collaborative approach to decision-making and strong analytical skills
- Good to have: certifications in any of GCP, AWS or Azure, Cloudera
- Ability to work on multiple projects simultaneously, prioritizing appropriately
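A minimal SCD Type 2 sketch with plain PySpark DataFrames, as referenced in the skills list above. The schema, data, and change rule (tracking only `city`) are made up for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-demo").getOrCreate()

dim = spark.createDataFrame(
    [(1, "Alice", "Pune"), (2, "Bob", "Mumbai")],
    ["id", "name", "city"],
).withColumn("is_current", F.lit(True))

updates = spark.createDataFrame([(2, "Bob", "Delhi")], ["id", "name", "city"])

# Keys whose tracked attribute changed.
changed = (dim.alias("d")
           .join(updates.alias("u"), "id")
           .where(F.col("d.city") != F.col("u.city"))
           .select("id"))

# Expire old versions, keep unchanged rows, append the new current rows.
expired   = dim.join(changed, "id", "left_semi").withColumn("is_current", F.lit(False))
unchanged = dim.join(changed, "id", "left_anti")
fresh     = updates.join(changed, "id", "left_semi").withColumn("is_current", F.lit(True))

scd2 = unchanged.unionByName(expired).unionByName(fresh)
scd2.show()
```

In production this merge would typically also stamp effective-from/to dates and run against a transactional table format rather than rebuilding the DataFrame, but the expire-and-append logic is the core of Type 2.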
Posted 2 months ago
10.0 - 15.0 years
25 - 35 Lacs
Pune
Work from Office
Education and Qualifications:
- Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics or equivalent

Work Experience:
- Minimum 10 years of experience in the data analytics field
- Minimum 6 years of experience running operations and support in a cloud data lakehouse environment
- Experience with Azure Databricks
- Experience in building and optimizing data pipelines, architectures and data sets
- Excellent experience in Scala or Python
- Ability to troubleshoot and optimize complex queries on the Spark platform
- Knowledgeable about structured and unstructured data design/modeling, data access and data storage techniques
- Experience with DevOps tools and environments

Technical / Professional Skills (please provide at least 3):
- Azure Databricks
- Python / Scala / Java
- HIVE / HBase / Impala / Parquet
- Sqoop, Kafka, Flume
- SQL and RDBMS
- Airflow
- Jenkins / Bamboo
- GitHub / Bitbucket
- Nexus

Screening questions:
- Have you worked on sizing clusters for Databricks in an Azure cloud environment?
- Have you done hands-on configuration and administration of the Databricks platform on Azure Cloud?
- Do you have experience in cluster management, storage management, workspace management, key management, etc.?
- Have you done cost optimization exercises to reduce the consumption cost of Databricks clusters?
- Have you done cost forecasting for the Databricks platform on Azure Cloud?
- How do you monitor cost anomalies, identify cost drivers and come up with recommendations?
- Have you done any RBAC configuration in the Databricks platform on Azure Cloud?
- Have you configured connectivity from Databricks to internal/external sources/applications such as Power BI, Google Analytics, SharePoint, etc.?
- What have you implemented, and how do you monitor the health of the Databricks platform, its services, the health of ETL pipelines, and the endpoints?
- What kind of proactive or self-healing processes are put in place to ensure service availability?
Posted 2 months ago
5 - 8 years
5 - 9 Lacs
Bengaluru
Work from Office
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

About The Role

Role Purpose: The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequently occurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, Technical Test performance

Mandatory Skills: Hadoop
Experience: 5-8 years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 months ago
5 - 8 years
6 - 10 Lacs
Pune
Work from Office
About The Role

Role Purpose: The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequently occurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, Technical Test performance

Mandatory Skills: Big Data
Experience: 5-8 years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 months ago
4 - 8 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Software Engineer, you will work closely with cross-functional teams to understand business requirements, design scalable solutions, and ensure the integrity and availability of our data. The ideal candidate will have a deep understanding of cloud technologies, UI technologies, and software engineering best practices, and a proven track record of successfully delivering complex projects.

Responsibilities:
- Lead the design and implementation of cloud-based data architectures
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements
- Stay current with industry trends and emerging technologies in cloud engineering

Qualifications:
- B.Tech. degree in computer science or an equivalent field
- Hands-on programming experience
- Experience with the React frontend framework and a deep understanding of React.js and Redux
- Proficient in programming languages such as Python, Java, Scala, GoLang, JavaScript
- Proficiency in cloud services such as AWS, Azure, or Google Cloud
- Expertise in building UI and data integration services
- Experience with streaming UI technologies
- Experience building data streaming solutions using Apache Spark / Apache Storm / Flink / Flume

Preferred Qualifications:
- Knowledge of data warehouse solutions (Redshift, BigQuery, Snowflake, Druid)
- Certification in cloud platforms
- Knowledge of machine learning and data science concepts
- Contributions to the open source community
Posted 2 months ago
8 - 13 years
50 - 55 Lacs
Bengaluru
Work from Office
As an Engineering Manager, you will lead a team of engineers responsible for the development and implementation of our cloud-based data infrastructure. You will work closely with cross-functional teams to understand data requirements, design scalable solutions, and ensure the integrity and availability of our data. The ideal candidate will have a deep understanding of cloud technologies, data engineering best practices, and a proven track record of successfully delivering complex data projects.

Key Responsibilities:
- Hire, develop, and retain top engineering talent
- Build and nurture self-sustained, high-performing teams
- Provide mentorship and technical guidance to engineers, fostering continuous learning and development
- Lead the design, development, and deployment of scalable cloud-based data infrastructure and applications
- Drive end-to-end execution of complex data engineering projects
- Partner with data scientists, ML engineers, and business stakeholders to understand data needs and translate them into scalable engineering solutions
- Align technical strategy with business goals through effective communication and collaboration
- Implement and enforce best practices for data security, privacy, and compliance with regulatory standards
- Optimize data storage, processing, and retrieval for improved performance and cost efficiency
- Continuously evaluate and improve the system architecture and workflows
- Stay current with emerging trends and technologies in cloud data engineering
- Recommend and adopt tools, frameworks, and platforms that enhance productivity and reliability

Qualifications:
- Bachelor's degree in Computer Science or a related field
- Minimum 8 years of experience in software development, with at least 2 years in a technical leadership or management role
- Proven experience as a full stack developer, with a focus on cloud platforms
- Proficient in programming languages such as Python
- Strong hands-on expertise with Python frameworks (Django, Flask, or FastAPI; RESTful APIs), React.js and modern JavaScript
- Experience with authentication and authorization (OAuth, JWT)
- Strong understanding of cloud services, preferably AWS, and experience building cloud-native platforms using containerization technologies such as Kubernetes, Docker, and Helm

Preferred Qualifications:
- Knowledge of data warehouse solutions (BigQuery, Snowflake, Druid) and big data technologies such as Spark, Kafka, Hive, Iceberg, Trino, Flink
- Experience with big data technologies (Hadoop, Spark, etc.)
- Experience with streaming data technologies (Kafka, Kinesis)
- Experience building data streaming solutions using Apache Spark / Apache Storm / Flink / Flume
- Familiarity with machine learning pipelines is an added advantage
- Proven ability to deliver complex, high-scale systems in a production environment
- Strong people management and cross-functional collaboration skills
Posted 2 months ago
3 - 6 years
8 - 12 Lacs
Bengaluru
Work from Office
As a Software Engineer, you will work closely with cross-functional teams to understand business requirements, design scalable solutions, and ensure the integrity and availability of our data. The ideal candidate will have a deep understanding of cloud technologies, UI technologies, and software engineering best practices, and a proven track record of successfully delivering complex projects.

Responsibilities:
- Lead the design and implementation of cloud-based data architectures
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements
- Stay current with industry trends and emerging technologies in cloud engineering

Qualifications:
- Bachelor's degree in computer science or an equivalent field with 3-6 years of hands-on programming experience
- Proven experience as a full stack engineer, with a focus on cloud platforms
- Strong proficiency in cloud services such as AWS, Azure, or Google Cloud
- Expertise in building UI and data integration services
- Proficient in programming languages such as Python, Java, Scala, GoLang, JavaScript
- Knowledge of data warehouse solutions (Redshift, BigQuery, Snowflake, Druid)
- Experience with streaming UI technologies
- Experience building data streaming solutions using Apache Spark, Storm, Flink, Flume

Preferred Qualifications:
- Certification in cloud platforms
- Knowledge of machine learning and data science concepts
- Contributions to the open source community
Posted 2 months ago
7 - 11 years
50 - 60 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Role: Resident Solution Architect
Location: Remote

The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes, all the while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. This role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include:
- Expert-level knowledge of data frameworks, data lakes and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Expert-level hands-on coding experience in Python, SQL, Spark/Scala, or PySpark
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- IoT/event-driven/microservices in the cloud: experience with private and public cloud architectures, their pros/cons, and migration considerations
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- Extensive hands-on experience with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Experience using Azure DevOps and CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Experience in creating tables, partitioning, bucketing, loading and aggregating data using Spark SQL/Scala (a minimal Delta Lake sketch follows this description)
- Able to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and defining conceptual, logical and physical data models
- Proficient-level experience with architecture design, build and optimization of big data collection, ingestion, storage, processing, and visualization

Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable
- Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure
- Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling
- Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery
- Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it
- Employ change management best practices to ensure that data remains readily accessible to the business
- Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs; experience with MDM using data governance solutions

Qualifications:
- Overall experience of 12+ years in the IT field
- Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for near-real-time data warehouses, and machine learning solutions
- Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions
- Experience in software development, data engineering, or data analytics using Python, Scala, Spark, Java, or equivalent technologies
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience
- Good to have: advanced technical certifications such as Azure Solutions Architect Expert, AWS Certified Data Analytics, DASCA Big Data Engineering and Analytics, AWS Certified Cloud Practitioner, Solutions Architect, or Professional Google Cloud Certified

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
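A hedged sketch of the Delta Lake table work referenced above, shown in PySpark. It assumes the delta-spark package is available to the session; the table path, data, and partition column are illustrative:

```python
from pyspark.sql import SparkSession

# Standard Delta Lake session configs; requires the delta-spark package.
spark = (SparkSession.builder
         .appName("delta-demo")
         .config("spark.sql.extensions",
                 "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

events = spark.createDataFrame(
    [("2024-06-01", "click", 3), ("2024-06-01", "view", 9)],
    ["event_date", "event_type", "cnt"],
)

# Partitioned Delta write; Delta layers ACID transactions and time travel
# on top of plain Parquet files.
(events.write
       .format("delta")
       .mode("overwrite")
       .partitionBy("event_date")
       .save("/tmp/delta/events"))

spark.read.format("delta").load("/tmp/delta/events").show()
```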
Posted 2 months ago