6.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Mobileum is a leading provider of telecom analytics solutions for roaming, core network, security, risk management, domestic and international connectivity testing, and customer intelligence. More than 1,000 customers rely on its Active Intelligence platform, which provides advanced analytics solutions, allowing customers to connect deep network and operational intelligence with real-time actions that increase revenue, improve customer experience, and reduce costs. Headquartered in Silicon Valley, Mobileum has global offices in Australia, Dubai, Germany, Greece, India, Portugal, Singapore and the UK, with a global headcount of 1,800+.

Join the Mobileum Team. At Mobileum we recognize that our team is the main reason for our success. What does working with us mean? Opportunities!

Role: Lead/Sr Lead - Big Data
Reports to: Manager/Director

Responsibilities & Deliverables: This person will be responsible for the development, support and maintenance of complex projects to build and enhance best-in-class network security solutions.

Job Description: As a Lead, you will be responsible for leading the design, development, and implementation of complex projects to build and enhance best-in-class, cutting-edge solutions, including the design and development of Roaming Engineering products.
Technical Leadership: Define and drive the technical vision, architecture, and engineering standards. Stay ahead of new technologies and drive innovation.
Solution Design: Lead the development of scalable, high-performance, and resilient systems. Enforce coding standards, conduct code reviews, and write and run comprehensive integration tests to deliver high-quality products.
Performance Optimization: Identify and resolve performance bottlenecks in software applications. Gather feedback from stakeholders about improvements to the code stack and feature sets. Adopt agile practices to track and update the status of assigned tasks/stories.

Qualifications: Bachelor's degree in computer science or equivalent.

Skill Set:
6+ years of overall experience, with Java development experience in the wireless/telecom/VoIP/data networking domain.
Solid knowledge of Hadoop/HDFS and cloud-native architecture.
Knowledge of NoSQL and distributed databases.
Experience in building and designing microservices and cloud-native applications.
Good knowledge of Spark, Java and big data technologies.
Experience with Kubernetes (K3s, OpenShift) and Docker.
Experience using version control systems like Git for managing code repositories, branching, and collaboration.
Strong troubleshooting capabilities for complicated problems in remote systems.
Experience in developing or designing highly available/redundant software.
Experience in developing or designing telecommunications software is a plus.

Location: Bangalore
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
This position allows the successful incumbent to develop deep technical and broad commercial skills by being exposed to, and working on, a wide variety of internal projects. Working as part of a large, sophisticated analytics team, this role will be required to:
Manipulate and analyse data for the development and validation of predictive models in the areas of credit risk, demographics, marketing, fraud and ratings, in a distributed computing environment
Monitor Equifax's cutting-edge scorecards and risk tools
Create business case and analytic insight materials that show the value of predictive modelling, and visualise the insights
Conduct ad hoc queries such as bureau insights and testing solutions
Lead small projects, quality-check your own and others' outputs, and guide junior data scientists
Maintain documentation that supports the Analytics platform of databases, data products and analytical solutions

What You'll Do:

Project Leadership / Mentoring
Mentor analysts through the development process
Drive a focus on learning so that the team benefits from your coaching and other learning events that contribute to improved performance
Develop and maintain a network of influence
Demonstrate leadership by example

Data Analysis
Lead statistical and data analysis, sometimes as project team leader
Demonstrate analytic leadership in the approaches used and proactively provide technical guidance and support to analysts
Demonstrate a high level of skill in the programming required to perform analysis, such as demonstrated experience with SQL, R, Python and Tableau on cloud environments such as GCP or equivalent
Process and analyse large volumes of data in the development of insights
Prepare documentation for all work, to ensure that an audit trail is sufficient for a colleague to quality review and/or repeat your analysis
Quality-check your own and other analysts' work output to ensure error-free delivery of information and analysis
Develop an extensive repertoire of analytical methodologies and techniques for investigating data relationships and insights
Adhere to Equifax project management standards and make effective use of project management resources (methodology, templates, time recording systems and project office)

Product and Service
Contribute to the analytic roadmap, product development and innovation
Develop a detailed understanding of the full product and service offering available through Equifax, as well as the market dynamics and requirements within the data-driven marketing space
Proactively use this understanding to work with the team and stakeholders to enhance and expand Equifax's data and insights assets

What Experience You Need
BS degree in a STEM major or equivalent discipline
5-7 years of experience in a related analytical role
Proven track record of designing and developing predictive models in real-world applications
Experience with model performance evaluation and predictive model optimization for accuracy and efficiency
Cloud certification strongly preferred
Additional role-based certifications may be required depending upon region/BU requirements

Process Improvement and Efficiencies
Drive the transition to more advanced modelling environments, utilising distributed computing and methodologies such as machine learning
Demonstrate an understanding of business needs, making recommendations relating to new or improved data and insights assets
Support the other teams within the Analytics function by working with them to improve processes as needed
Systems and Processes
Develop a detailed understanding of Equifax databases, data structures and core data analysis procedures, as well as maximising output through a Hadoop-based file system
Develop an understanding of the best-practice model management framework to ensure Equifax's models remain optimal in terms of performance and stability
Develop familiarity with Equifax's documented project management methodology and resources (templates, time recording system, work scheduling etc.)

What Could Set You Apart
Passion for data science, data mining and machine learning, and experience with big data architectures and methods
A Master's degree in a quantitative field (Statistics, Mathematics, Economics)
Cloud certification such as GCP strongly preferred
Self-starter
Excellent communicator / client-facing
Ability to work in a fast-paced environment
Flexibility to work across A/NZ time zones based on project needs
Posted 1 week ago
2.0 - 5.0 years
4 - 8 Lacs
New Delhi, Chennai, Bengaluru
Hybrid
Your day at NTT DATA

Senior GenAI Data Engineer

We are seeking an experienced Senior Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing

Key Responsibilities:
Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment.
Data Ingestion and Integration: Develop data ingestion frameworks to collect data from various sources, then transform and integrate it into a unified data platform for GenAI model training and deployment.
GenAI Model Integration: Collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
Cloud Infrastructure Management: Design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow.
Performance Optimization: Optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
Data Security and Compliance: Ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
Client Collaboration: Collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
Innovation and R&D: Stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
Knowledge Sharing: Share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.
Requirements:
Bachelor's degree in computer science, engineering, or related fields (Master's recommended)
Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications
5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or Cloud Native platforms)
Proficiency in programming languages like SQL, Python, and PySpark
Strong data architecture, data modeling, and data governance skills
Experience with big data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi)
Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions)
Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras)

Nice to have:
Experience with containerization and orchestration tools like Docker and Kubernetes
Experience integrating vector databases and implementing similarity search techniques, with a focus on GraphRAG, is a plus
Familiarity with API gateway and service mesh architectures
Experience with low-latency/streaming, batch, and micro-batch processing
Familiarity with Linux-based operating systems and REST APIs

Location: Delhi or Bangalore
Workplace type: Hybrid Working
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
Build data pipelines to ingest, process, and transform data from files, streams and databases
Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS
Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies
Develop streaming pipelines
Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka

Preferred Education
Master's Degree

Required Technical And Professional Expertise
Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
Minimum 3 years of experience on Cloud Data Platforms on AWS
Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB
Good to excellent SQL skills
Exposure to streaming solutions and message brokers like Kafka

Preferred Technical And Professional Experience
Certification in AWS and Databricks, or Cloudera Spark Certified Developer
Posted 1 week ago
12.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformation, Teradata data warehouse, Hadoop analytics). Responsible for architecture for small/mid-size projects.

Outcomes
Implement either data extraction and transformation for a data warehouse (ETL, data extracts, data-load logic, mapping, workflows, stored procedures), a data analysis solution, data reporting solutions, or cloud data tools on any one of the cloud providers (AWS/Azure/GCP)
Understand business workflows and related data flows
Develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data, or design data fetching and dashboards
Design information structure and work- and dataflow navigation; define backup, recovery and security specifications
Enforce and maintain naming standards and data dictionaries for data models
Provide or guide the team to perform estimates
Help the team to develop proofs of concept (POCs) and solutions relevant to customer problems; be able to troubleshoot problems while developing POCs
Architect / Big Data specialty certification (AWS/Azure/GCP/general, for example via Coursera or a similar learning platform/any ML)

Measures Of Outcomes
Percentage of billable time spent in a year on developing and implementing data transformation or data storage
Number of best practices documented for any new tool and technology emerging in the market
Number of associates trained on the data service practice

Outputs Expected

Strategy & Planning:
Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap
Implement methods and procedures for tracking data quality, completeness, redundancy and improvement
Ensure that data strategies and architectures meet regulatory compliance requirements
Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators and scientific research communities, or attend conferences with respect to data in the cloud

Operational Management:
Help Architects to establish governance, stewardship and frameworks for managing data across the organization
Provide support in implementing the appropriate tools, software, applications and systems to support data technology goals
Collaborate with project managers and business teams on all projects involving enterprise data
Analyse data-related issues with systems integration, compatibility and multi-platform integration

Project Control And Review:
Provide advice to teams facing complex technical issues in the course of project delivery
Define and measure project- and program-specific architectural and technology quality metrics

Knowledge Management & Capability Development:
Publish and maintain a repository of solutions, best practices, standards and other knowledge articles for data management
Conduct and facilitate knowledge sharing and learning sessions across the team
Gain industry-standard certifications in the technology or area of expertise
Support technical skill building (including hiring and training) for the team based on inputs from the project manager / RTEs
Mentor new members of the team in technical areas
Gain and cultivate domain expertise to provide the best and most optimized solutions to customers (delivery)

Requirement Gathering And Analysis:
Work with customer business owners and other teams to collect, analyze and understand the requirements, including NFRs / define NFRs
Analyze gaps and trade-offs based on the current system context and industry practices; clarify the requirements by working with the customer
Define the systems and sub-systems that make up the programs

People Management:
Set goals and manage the performance of team engineers
Provide career guidance to technical specialists and mentor them

Alliance Management:
Identify alliance partners based on an understanding of service offerings and client requirements
In collaboration with the Architect, create a compelling business case around the offerings
Conduct beta testing of the offerings and relevance to the program

Technology Consulting:
In collaboration with Architects II and III, analyze the application and technology landscape, processes and tools to arrive at the architecture options that best fit the client program
Analyze cost vs. benefits of solution options
Support Architects II and III to create a technology/architecture roadmap for the client
Define the architecture strategy for the program

Innovation And Thought Leadership:
Participate in internal and external forums (seminars, paper presentations, etc.)
Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency
Identify business opportunities to create reusable components/accelerators, and reuse existing components and best practices

Project Management Support:
Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies

Stakeholder Management:
Monitor the concerns of internal stakeholders like Product Managers and RTEs, and external stakeholders like client architects, on architecture aspects; follow through on commitments to achieve timely resolution of issues
Conduct initiatives to meet client expectations
Work to expand your professional network in the client organization at the team and program levels

New Service Design:
Identify potential opportunities for new service offerings based on customer voice / partner inputs
Conduct beta testing / POCs as applicable
Develop collateral and guides for GTM

Skill Examples
Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of Architects
Use technology knowledge to create proofs of concept (POCs) / (reusable) assets under the guidance of the specialist; apply best practices in your own area of work, helping with performance troubleshooting and other complex troubleshooting
Define, decide and defend the technology choices made; review solutions under guidance
Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST
Use independent knowledge of design patterns, tools and principles to create high-level designs for the given requirements; evaluate multiple design options and choose the appropriate option for the best possible trade-offs
Conduct knowledge sessions to enhance the team's design capabilities
Review the low- and high-level designs created by Specialists for efficiency (consumption of hardware and memory, memory leaks, etc.)
Use knowledge of software development processes, tools and techniques to identify and assess incremental improvements to the software development process, methodology and tools
Take technical responsibility for all stages in the software development process
Conduct optimal coding with a clear understanding of memory leakage and its related impact
Implement global standards and guidelines relevant to programming and development; come up with points of view and new technological ideas
Use knowledge of project management and agile tools and techniques to support, plan and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies
Use knowledge of project metrics to understand their relevance to the project; collect and collate project metrics and share them with the relevant stakeholders
Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place
Strong proficiency in understanding data workflows and dataflows
Attention to detail
High analytical capabilities

Knowledge Examples
Data visualization
Data migration
RDBMSs (relational database management systems)
SQL
Hadoop technologies like MapReduce, Hive and Pig
Programming languages, especially Python and Java
Operating systems like UNIX and MS Windows
Backup/archival software

Additional Comments

Snowflake Architect

Key Responsibilities:
Solution Design: Design the overall data architecture within Snowflake, including database/schema structures, data flow patterns (ELT/ETL strategies involving Snowflake), and integration points with other systems (source systems, BI tools, data science platforms).
Data Modeling: Design efficient and scalable physical data models within Snowflake. Define table structures, distribution/clustering keys, data types, and constraints to optimize storage and query performance.
Security Architecture: Design the overall security framework, including the RBAC strategy, data masking policies, encryption standards, and how Snowflake security integrates with broader enterprise security policies.
Performance and Scalability Strategy: Design solutions with performance and scalability in mind. Define warehouse sizing strategies, query optimization patterns, and best practices for development teams. Ensure the architecture can handle future growth in data volume and user concurrency.
Cost Optimization Strategy: Design architectures that are inherently cost-effective. Make strategic choices about data storage, warehouse usage patterns, and feature utilization (e.g., when to use materialized views, streams, tasks).
Technology Evaluation and Selection: Evaluate and recommend specific Snowflake features (e.g., Snowpark, Streams, Tasks, External Functions, Snowpipe) and third-party tools (ETL/ELT, BI, governance) that best fit the requirements.
Standards and Governance: Define best practices, naming conventions, development guidelines, and governance policies for using Snowflake effectively and consistently across the organization.
Roadmap and Strategy: Align the Snowflake data architecture with overall business intelligence and data strategy goals. Plan for future enhancements and platform evolution.
Technical Leadership: Provide guidance and mentorship to developers, data engineers, and administrators working with Snowflake.

Key Skills:
Deep understanding of Snowflake's advanced features and architecture.
Strong data warehousing concepts and data modeling expertise.
Solution architecture and system design skills.
Experience with cloud platforms (AWS, Azure, GCP) and how Snowflake integrates with them.
Expertise in performance tuning principles and techniques at an architectural level.
Strong understanding of data security principles and implementation patterns.
Knowledge of various data integration patterns (ETL, ELT, streaming).
Excellent communication and presentation skills to articulate designs to technical and non-technical audiences.
Strategic thinking and planning abilities.

We are looking for a candidate with 12+ years of experience to join our team.

Skills: Snowflake, data modeling, cloud platforms, solution architecture
Posted 1 week ago
1.0 - 3.0 years
3 - 5 Lacs
New Delhi, Chennai, Bengaluru
Hybrid
Your day at NTT DATA

We are seeking an experienced Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing

Key Responsibilities:
Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment.
Data Ingestion and Integration: Develop data ingestion frameworks to collect data from various sources, then transform and integrate it into a unified data platform for GenAI model training and deployment.
GenAI Model Integration: Collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
Cloud Infrastructure Management: Design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow.
Performance Optimization: Optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
Data Security and Compliance: Ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
Client Collaboration: Collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
Innovation and R&D: Stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
Knowledge Sharing: Share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
Bachelor's degree in computer science, engineering, or related fields (Master's recommended)
Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications
5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or Cloud Native platforms)
Proficiency in programming languages like SQL, Python, and PySpark
Strong data architecture, data modeling, and data governance skills
Experience with big data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi)
Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions)
Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras)

Nice to have:
Experience with containerization and orchestration tools like Docker and Kubernetes
Experience integrating vector databases and implementing similarity search techniques, with a focus on GraphRAG, is a plus
Familiarity with API gateway and service mesh architectures
Experience with low-latency/streaming, batch, and micro-batch processing
Familiarity with Linux-based operating systems and REST APIs
Posted 1 week ago
3.0 - 7.0 years
12 - 17 Lacs
New Delhi, Chennai, Bengaluru
Hybrid
Your day at NTT DATA

We are seeking an experienced Data Architect to join our team in designing and delivering innovative data solutions to clients. The successful candidate will be responsible for architecting, developing, and implementing data management solutions and data architectures for various industries. This role requires strong technical expertise, excellent problem-solving skills, and the ability to work effectively with clients and internal teams to design and deploy scalable, secure, and efficient data solutions.

What you'll be doing

Experience and Leadership:
Proven experience in data architecture, with a recent role as a Lead Data Solutions Architect or a similar senior position in the field.
Proven experience in leading architectural design and strategy for complex data solutions, and then overseeing their delivery.
Experience in consulting roles, delivering custom data architecture solutions across various industries.

Architectural Expertise:
Strong expertise in designing and overseeing the delivery of data streaming and event-driven architectures, with a focus on Kafka and Confluent platforms.
In-depth knowledge of architecting and implementing data lakes and lakehouse platforms, including experience with Databricks and Unity Catalog.
Proficiency in conceptualising and applying Data Mesh and Data Fabric architectural patterns.
Experience in developing data product strategies, with a strong inclination towards a product-led approach in data solution architecture.
Extensive familiarity with cloud data architecture on platforms such as AWS, Azure, GCP, and Snowflake.
Understanding of cloud platform infrastructure and its impact on data architecture.

Data Technology Skills:
A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems.
Knowledge of programming languages such as Python or R is beneficial.
Exposure to ETL/ELT processes, SQL and NoSQL databases is a nice-to-have, providing a well-rounded background.
Experience with data visualization tools and DevOps principles/tools is advantageous.
Familiarity with machine learning and AI concepts, particularly in how they integrate into data architectures.

Design and Lifecycle Management:
Proven background in designing modern, scalable, and robust data architectures.
Comprehensive grasp of the data architecture lifecycle, from concept to deployment and consumption.

Data Management and Governance:
Strong knowledge of data management principles and best practices, including data governance frameworks.
Experience with data security and compliance regulations (GDPR, CCPA, HIPAA, etc.).

Leadership and Communication:
Exceptional leadership skills to manage and guide a team of architects and technical experts.
Excellent communication and interpersonal skills, with a proven ability to influence architectural decisions with clients and guide best practices.

Project and Stakeholder Management:
Experience with agile methodologies (e.g. SAFe, Scrum, Kanban) in the context of architectural projects.
Ability to manage project budgets, timelines, and resources, maintaining focus on architectural deliverables.

Location: Delhi or Bangalore
Workplace type: Hybrid Working
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a highly experienced Senior Data Software Engineer to join our dynamic team and tackle challenging projects that will enhance your skills and career. As a Senior Engineer, your contributions will be critical in designing and implementing data solutions across a variety of projects. The ideal candidate will possess deep experience in big data and associated technologies, with a strong emphasis on Apache Spark, Python, Azure and AWS.

Responsibilities
Develop and execute end-to-end data solutions to meet complex business needs
Work collaboratively with interdisciplinary teams to comprehend project needs and deliver superior software solutions
Apply your expertise in Apache Spark, Python, Azure and AWS to create scalable and efficient data processing systems
Maintain and enhance the performance, security, and scalability of data applications
Keep abreast of industry trends and technological advancements to foster continuous improvement in our development practices

Requirements
5-8 years of direct experience in data and related technologies
Advanced knowledge of and hands-on experience with Apache Spark
High-level proficiency with Hadoop and Hive
Proficiency in Python
Prior experience with AWS and Azure native cloud data services

Technologies: Hadoop, Hive
Posted 1 week ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 16 years full time education

Cloud Database Engineer HANA

Required Skills:
SAP HANA database administration, with knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability
Proficiency in monitoring and maintaining the health and performance of high-availability systems
Experience with public cloud platforms such as GCP, AWS, or Azure
Strong troubleshooting skills and the ability to provide effective resolutions for technical issues

Desired Skills:
Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres
A growth and product mindset and a strong focus on automation
Working knowledge of Kubernetes for container orchestration and scalability

Activities:
Collaborate closely with cross-functional teams to gather requirements and support SAP teams in executing database initiatives
Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments
Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and previously agreed
Act as the point of contact for escalated technical issues with our Engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions that unblock our partners

Requirements:
Bachelor's degree in computer science, engineering, or a related field
Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems
Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes
Strong troubleshooting skills and the ability to provide effective resolutions for technical issues
Familiarity with public cloud platforms such as GCP, AWS, or Azure
Understanding of Agile principles and methodologies
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Lead Java Developer with Big Data Technologies Experience
Location: Hyderabad Office
Employment Mode: Hybrid - 3 Days/Week

Job Overview:
We are seeking a highly skilled Senior Java Developer with 10 years of experience to join our dynamic team. The ideal candidate will have deep expertise in Java, Big Data, Spring, Kafka, AWS, Scala, and Spark. You will be responsible for designing, developing, and optimizing scalable applications while working with cutting-edge technologies in a fast-paced environment.

Key Responsibilities:
Design, develop, and maintain high-performance, scalable applications using Java, Scala, and big data technologies
Implement and manage real-time data processing solutions using Kafka and Spark
Develop robust backend services using Spring Boot and a microservices architecture
Deploy and manage applications in AWS cloud environments
Optimize application performance and ensure high availability and reliability
Collaborate with cross-functional teams to gather requirements and deliver technical solutions
Lead and mentor junior developers, ensuring best practices and coding standards
Troubleshoot and resolve complex technical issues in distributed systems

Required Skills & Qualifications:
10+ years of professional experience in software development
Expertise in Java, Spring Boot, and microservices architecture
Strong experience in big data technologies (Hadoop, Spark, Kafka, etc.)
Proficiency in Scala for data-intensive applications
Hands-on experience with AWS services (EC2, S3, Lambda, EMR, etc.)
Strong understanding of distributed systems and real-time data processing
Experience with CI/CD pipelines and DevOps practices
Excellent problem-solving skills and the ability to work in a collaborative team environment

Nice-to-Have:
Experience with containerization tools such as Docker and Kubernetes
Exposure to machine learning and data science technologies
Familiarity with NoSQL databases like Cassandra or MongoDB
Posted 1 week ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration, Ansible on Microsoft Azure
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 16 years full time education

Cloud Database Engineer HANA

Required Skills:
SAP HANA database administration, with knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability
Proficiency in monitoring and maintaining the health and performance of high-availability systems
Experience with public cloud platforms such as GCP, AWS, or Azure
Strong troubleshooting skills and the ability to provide effective resolutions for technical issues

Desired Skills:
Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres
A growth and product mindset and a strong focus on automation
Working knowledge of Kubernetes for container orchestration and scalability

Activities:
Collaborate closely with cross-functional teams to gather requirements and support SAP teams in executing database initiatives
Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments
Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and previously agreed
Act as the point of contact for escalated technical issues with our Engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions that unblock our partners

Requirements:
Bachelor's degree in computer science, engineering, or a related field
Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems
Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes
Strong troubleshooting skills and the ability to provide effective resolutions for technical issues
Familiarity with public cloud platforms such as GCP, AWS, or Azure
Understanding of Agile principles and methodologies
Posted 1 week ago
5.0 - 10.0 years
9 - 16 Lacs
Hyderabad
Work from Office
Job Title: Big Data Engineer - Java & Spark
Location: Hyderabad
Work Mode: Onsite (5 days a week)
Experience: 5 to 10 Years

Job Summary:
We are hiring an experienced Big Data Engineer with strong expertise in Java, Apache Spark, and big data technologies. You will be responsible for designing and implementing scalable data pipelines that support real-time and batch processing for data-driven applications.

Key Responsibilities:
Develop and maintain scalable batch and streaming data pipelines using Java and Apache Spark
Work with Hadoop, Hive, Kafka, and HDFS to manage and process large datasets
Collaborate with data analysts, scientists, and other engineering teams to understand data requirements
Optimize Spark jobs and ensure performance and reliability in production
Maintain data quality, governance, and security best practices

Required Skills:
5-10 years of hands-on experience in data engineering or related roles
Strong programming skills in Java
Expertise in Apache Spark for data processing and transformation
Good understanding of big data frameworks: Hadoop, Hive, Kafka, HDFS
Experience with distributed systems and large-scale data processing
Familiarity with cloud platforms such as AWS, GCP, or Azure

Good to Have:
Experience with workflow orchestration tools like Airflow or NiFi
Knowledge of containerization (Docker, Kubernetes)
Exposure to CI/CD pipelines and version control (e.g., Git)

Education:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Why Join Us:
Be part of a high-impact data engineering team
Work on modern data platforms with the latest open-source tools
Strong tech culture with career growth opportunities
Posted 1 week ago
18.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
LinkedIn is the world's largest professional network, built to help members of all backgrounds and experiences achieve more in their careers. Our vision is to create economic opportunity for every member of the global workforce. Every day our members use our products to make connections, discover opportunities, build skills and gain insights. We believe amazing things happen when we work together in an environment where everyone feels a true sense of belonging, and that what matters most in a candidate is having the skills needed to succeed. It inspires us to invest in our talent and support career growth. Join us to challenge yourself with work that matters.

As part of our world-class software engineering team, you will be charged with building the next-generation infrastructure and platforms for LinkedIn, including but not limited to: an application and service delivery platform, massively scalable data storage and replication systems, a cutting-edge search platform, a best-in-class AI platform, an experimentation platform, a privacy and compliance platform, etc. You will drive the technology vision and create business impact by putting to use your passion for very large-scale distributed systems and software as services, and your passion for writing code that performs at an extreme scale. LinkedIn has already pioneered well-known open-source infrastructure projects like Apache Kafka, Pinot, Azkaban, Samza, Venice, DataHub, Feathr, etc. We also work with industry-standard open-source infrastructure products like Kubernetes, gRPC and GraphQL - come join our infrastructure teams and share the knowledge with a broader community while making a real impact within our company.

At LinkedIn, we trust each other to do our best work where it works best for us and our teams. This role offers a hybrid work option, meaning you can both work from home and commute to a LinkedIn office, depending on what's best for you and when it is important for your team to be together.

Responsibilities:
- You will build and ship software, and drive architectural decisions and implementation across the big data ecosystem, including data processing pipelines, pub-sub systems, observability, compliance, etc.
- You will design the right interfaces/APIs for the long-term evolution of the services.
- You will be responsible for the high availability, reliability, scalability and performance of these systems.
- You will actively improve the level of craftsmanship at LinkedIn by developing best practices and defining best strategies.
- You will invest in growing junior engineers, developing design and craftsmanship skills and culture.
- You will lead cross-team/cross-functional discussions and drive alignment on product/technology strategy. You will work closely with other stakeholders at headquarters to drive intra-team alignment.
- You will be a primary domain expert who influences technology choices.

Basic Qualifications:
- BA/BS degree or higher in Computer Science or a related technical discipline, or related practical experience
- 18+ years of programming experience in Java, C++ or similar programming languages
- 18+ years' experience building large-scale distributed systems, applications or similar experience

Preferred Qualifications:
- MS or PhD in Computer Science or a related technical discipline.
- 18+ years of relevant work experience.
- Experience with strategic planning, executing on a long-term roadmap, team development and scaling processes to support business growth.
- Designing and building infrastructure and backend services at internet scale.
- Building and shipping high-quality work while achieving high reliability.
- Utilizing data and analysis in articulating technical problems and in arriving at recommendations and solutions.
- Leading high-impact cross-org initiatives.
- Expertise in big data systems like Apache Hadoop, Spark, Iceberg, etc. is highly desirable.

Suggested Skills:
- Big data technologies
- Large-scale distributed systems
- Communication

You will Benefit from our Culture:
We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.

India Disability Policy
LinkedIn is an equal employment opportunity employer offering opportunities to all job seekers, including individuals with disabilities. For more information on our equal opportunity policy, please visit https://legal.linkedin.com/content/dam/legal/Policy_India_EqualOppPWD_9-12-2023.pdf

Global Data Privacy Notice for Job Candidates
This document provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: https://legal.linkedin.com/candidate-portal
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Role: Technology Lead
Location: Bangalore / Mangalore
Type: Full-Time

Why MResult?
Founded in 2004, MResult is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. MResult's expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value. As part of our team, you will collaborate with top minds in the industry to deliver cutting-edge solutions that solve real-world challenges.

Website: https://mresult.com/
LinkedIn: https://www.linkedin.com/company/mresult/

What We Offer:
At MResult, you can leave your mark on projects at the world's most recognized brands, access opportunities to grow and upskill, and do your best work with the flexibility of hybrid work models. Great work is rewarded, and leaders are nurtured from within. Our values of Agility, Collaboration, Client Focus, Innovation, and Integrity are woven into our culture, guiding every decision.

What This Role Requires:
As a Technology Lead, you will play a critical role in driving innovation, delivering scalable digital solutions, and ensuring client success. You will be responsible for leading cross-functional teams, managing technical delivery, and stepping in as a solution architect and technical consultant. You will occasionally support the business development team as a pre-sales partner for solutioning. You will work on diverse technology initiatives and contribute to both business development and project execution, blending your technical expertise with strategic thinking.

Key Skills to Succeed in This Role:
Minimum 5 years of experience in technology management or a senior technical role (Architect/Tech Lead/Consultant).
Proven team leadership experience, managing cross-functional technical teams and ensuring successful project delivery.
Strong background in technical delivery and client-facing roles, including solutioning and stakeholder management.
Prior involvement in technology sales, pre-sales, or consulting engagements, including scoping, effort estimation, proposal writing, and presenting solutions to clients.

Technical Expertise (proficient in at least 3 areas below):
- Data Engineering: Hands-on experience with Apache Spark, Kafka, Airflow, Hadoop, etc.
- Web App Development: Building and deploying scalable web applications.
- AI/ML: Developing and integrating AI/ML models into enterprise applications.
- Cloud Platforms: AWS, Azure, or GCP: infrastructure, services, and architecture.
- BI Tools: Tableau, Power BI, Qlik: creating dashboards and driving insights.

Platform Specializations (one or more preferred):
- Salesforce: Development, customization, and administration.
- Anaplan: Modelling and planning for business use cases.
- Veeva Suite: CRM, Vault, or Network experience in healthcare/life sciences.

Pre-sales & Solutioning: Provide support to the business development team through technical discovery, demo sessions, and PoCs.
Project Management: Strong project management skills, with the ability to manage complex initiatives and deliver them on time and within budget.
Resource Management: Expertise in effectively allocating and managing resources across multiple projects, ensuring optimal utilization of both human and technical resources. Proficient in balancing workloads, tracking project progress, and adjusting resources to meet project goals and deadlines.
Analytical Mindset: Strong problem-solving and analytical skills, with the ability to break down complex business requirements and translate them into technical solutions.
Communication: Excellent communication skills, both verbal and written, with the ability to present technical information to non-technical stakeholders.
Education: Bachelor's degree in computer science, information technology, business administration, or a related field.

Manage, Master, and Maximize with MResult
MResult is an equal-opportunity employer committed to building an inclusive environment free of discrimination and harassment. Take the next step in your career with MResult, where your ideas help shape the future.
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Azure Data
Total IT Experience (in Yrs.): 4 to 7
Location: Indore / Pune
Relevant Experience Required (in Yrs.): 3+ years of direct experience in analyzing and deriving source systems, data governance, metadata management, data architecture, data quality and metadata-related output. Strong experience in different types of data analysis, covering business data, metadata, master data and analytical data.
Language Requirement: English
Keywords to search in resume: Databricks, Azure Data Factory

Must-Have Technical/Functional Skills:
3+ years of hands-on experience with Databricks
This role will be responsible for conducting an assessment of the existing systems in the landscape
Devise a strategy for SAS-to-Databricks migration activities
Work out a plan to perform the above activities
Work closely with the customer on a daily basis and present the progress made and the plan of action
Interact with onsite and offshore Cognizant associates to ensure that the project deliverables are on track

Secondary Skills:
Data management solutions with capabilities such as data ingestion, data curation, metadata and catalog, data security, data modeling, and data wrangling

Responsibilities:
Hands-on experience in installing, configuring and using MS Azure Databricks and Hadoop ecosystem components like DBFS, Parquet, Delta tables, HDFS, MapReduce programming, Kafka, Spark & Event Hub
In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
Hands-on experience in scripting languages like Scala & Python
Hands-on experience in the analysis, design, coding & testing phases of the SDLC with best practices
Expertise in using Spark SQL with various data sources like JSON, Parquet and key-value pairs
Experience in creating tables, partitioning, bucketing, loading and aggregating data using Spark SQL/Scala (a sketch of this kind of work follows this listing)
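For illustration only, here is a minimal PySpark sketch of the Spark SQL table work this posting names: creating a partitioned, bucketed table and aggregating it with Spark SQL. The database, table, column names and paths are hypothetical, not taken from the posting.

    # Minimal sketch of partitioned/bucketed table creation and aggregation in Spark SQL.
    # All names and paths below are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("orders-table-demo")
             .enableHiveSupport()  # persist tables in the Hive metastore
             .getOrCreate())

    spark.sql("CREATE DATABASE IF NOT EXISTS analytics")

    orders = spark.read.parquet("/data/raw/orders")  # hypothetical source path

    # Write a managed table partitioned by date and bucketed by customer.
    # Note: bucketBy/sortBy are only supported together with saveAsTable.
    (orders.write
        .mode("overwrite")
        .partitionBy("order_date")
        .bucketBy(8, "customer_id")
        .sortBy("customer_id")
        .format("parquet")
        .saveAsTable("analytics.orders"))

    # Aggregate the table with plain Spark SQL.
    daily = spark.sql("""
        SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
        FROM analytics.orders
        GROUP BY order_date
        ORDER BY order_date
    """)
    daily.show()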
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview:
TekWissen is a global workforce management provider throughout India and many other countries in the world. The below client is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.

Job Title: Senior Data Scientist
Location: Chennai
Duration: 12 Months
Work Type: Onsite

Position Description:
We are seeking an experienced and highly analytical Senior Data Scientist with a strong statistical background to join our dynamic team. You will be instrumental in leveraging our rich datasets to uncover insights, build sophisticated predictive models, and create impactful visualizations that drive strategic decisions.

Responsibilities:
Lead the end-to-end lifecycle of data science projects, from defining the business problem and exploring data to developing, validating, deploying, and monitoring models in production.
Apply advanced statistical methodologies and machine learning algorithms to analyze large, complex datasets (structured and unstructured) and extract meaningful patterns and insights.
Develop and implement robust, scalable, and automated processes for data analysis and model pipelines, leveraging cloud infrastructure.
Collaborate closely with business stakeholders and cross-functional teams to understand their analytical needs, translate them into technical requirements, and effectively communicate findings.
Create compelling and interactive dashboards and data visualizations to clearly present complex results and insights to both technical and non-technical audiences.
Stay up to date with the latest advancements in statistics, machine learning, and cloud technologies, and advocate for the adoption of best practices.

Skills Required: Statistics, machine learning, data science, problem solving, analytical and communication skills
Skills Preferred: GCP (Google Cloud Platform), mechanical engineering, cost analysis

Experience Required:
5+ years of progressive professional experience in a Data Scientist, Machine Learning Engineer, or similar quantitative role, with a track record of successfully delivering data science projects.
Bachelor's or Master's degree in Statistics. A strong foundation in statistical theory and application is essential for this role. (We might consider highly related quantitative fields like Applied Statistics, Econometrics, or Mathematical Statistics if they have a demonstrably strong statistical core, but Statistics is our primary focus.)
Proven hands-on experience applying a variety of machine learning techniques (e.g., regression, classification, clustering, tree-based models, potentially deep learning) to real-world business problems.
Strong proficiency in Python and its data science ecosystem (e.g., Pandas, NumPy, scikit-learn, potentially TensorFlow or PyTorch).
Hands-on experience working with cloud computing platforms (e.g., AWS, Azure, GCP) for data storage, processing, and deploying analytical solutions.
Extensive experience creating data visualizations and dashboards to effectively communicate insights. You know how to tell a story with data!
Solid understanding of experimental design, hypothesis testing, and statistical inference.
Excellent problem-solving skills, attention to detail, and the ability to work with complex data structures.
Strong communication, presentation, and interpersonal skills, with the ability to explain technical concepts clearly to diverse audiences.
Experience Preferred:
Experience working within the automotive industry or with related data such as vehicle telematics, manufacturing quality, supply chain, or customer behavior in an automotive context.
Experience with GCP services such as BigQuery, GCS, Cloud Run, Cloud Build, Cloud Source Repositories, and Cloud Workflows.
Proficiency with specific dashboarding and visualization tools such as Looker Studio, Power BI, Qlik, or Tableau.
Experience with SQL for data querying and manipulation.
Familiarity with big data technologies (e.g., Spark, Hadoop).
Experience with MLOps practices and tools for deploying and managing models in production.
Advanced degree (PhD) in Statistics or a related quantitative field.

Education Required: Bachelor's Degree
Education Preferred: Master's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us: Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.
Responsibilities:
1. Ensure the stable operation of all kinds of databases across the company's business lines, including but not limited to MySQL, Redis, Hadoop, and ES.
2. Own the architecture design of database clusters; work with business departments to deliver database availability assurance and vertical and horizontal scaling solutions.
3. Optimize and tune databases alongside the business to deliver low-cost, high-performance database services.
4. Develop and design database operation and maintenance automation products and related components.
5. Maintain technical sensitivity and continue researching new database technology.
Requirements:
1. Installing various versions of MySQL (e.g., 5.x/8.x) and MariaDB on Linux, Windows, and other operating systems, both on-premises and on clouds such as AWS and Azure.
2. Managing cloud-managed (AWS/Azure) MySQL/MariaDB services.
3. Installation, creation, and maintenance of databases.
4. Taking online and offline backups; managing daily and weekly backups.
5. Responsible for weekly DB maintenance and production database release tasks.
6. Working with storage engines such as MyISAM, InnoDB, and Memory.
7. Installing MySQL/MariaDB from source and binary distributions.
8. Performing security patching, bug fixes, and minor upgrades on production, pre-prod, and development database instances.
9. Troubleshooting major incidents, connectivity issues, and performance and database slowness issues.
10. Monitoring performance and availability of databases during business hours and on-call hours.
11. Automating daily/recurring tasks such as user creation, log purging, and daily backups (see the sketch below).
12. Adhering to security norms and granting database users privileges per agreed standards, including database user role creation.
13. MySQL/MariaDB replication setup.
Why join us: Because you get an opportunity to make a difference, and have a great time doing that. You are challenged and encouraged here to do stuff that is meaningful for you and for those we serve. You should work with us if you think seriously about what technology can do for people. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be.
Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
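To give a flavor of the automation work described in point 11, the following is a minimal sketch assuming the mysql-connector-python driver; the host, user names, and passwords are placeholders, and the seven-day retention window is an illustrative choice, not a stated policy.

```python
# Minimal sketch of automating recurring DBA housekeeping on MySQL/MariaDB.
# Requires mysql-connector-python; connection values are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="db.example.internal",  # placeholder host
    user="dba_automation",
    password="***",
)
cur = conn.cursor()

# Create a least-privilege application user (idempotent on MySQL 5.7+).
cur.execute("CREATE USER IF NOT EXISTS 'app_ro'@'%' IDENTIFIED BY 'change_me'")
cur.execute("GRANT SELECT ON appdb.* TO 'app_ro'@'%'")

# Purge binary logs older than 7 days to reclaim disk space.
cur.execute("PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY")

conn.commit()
cur.close()
conn.close()
```

In practice a script like this would run from cron or a scheduler, with the purge window aligned to the backup retention policy so no binlog needed for point-in-time recovery is removed.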
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Role: We are looking for an enthusiastic Staff Data Scientist to join our growing team. The hire will work in collaboration with other data scientists and engineers across the organization to develop production-quality models for a variety of problems across Razorpay. Some possible problems include: making recommendations to merchants from Razorpay's suite of products, cost optimization of transactions for merchants, and automatic address disambiguation/correction to enable tracking customer purchases using advanced natural language processing techniques (a toy baseline for the last of these is sketched after this posting). As part of the DS team @ Razorpay, you'll work with some of the smartest engineers/architects/data scientists in the industry and have the opportunity to solve complex and critical problems for Razorpay.
Responsibilities: Lead the data science team, providing guidance, mentorship, and technical expertise to drive impactful outcomes. Apply advanced data science methodologies, mathematics, and machine learning techniques to solve intricate and strategic business problems. Collaborate closely with cross-functional teams, including engineers, product managers, and business stakeholders, to develop and deploy robust data science solutions. Conduct in-depth analysis of large and complex datasets to extract valuable insights and drive actionable recommendations. Present findings, insights, and strategic recommendations to senior stakeholders in a clear and concise manner. Identify key metrics and develop executive-level dashboards to monitor performance and support data-driven decision-making. Oversee multiple projects concurrently, ensuring high-quality deliverables within defined timelines. Train and mentor junior data scientists, fostering their professional growth and a collaborative, innovative team environment. Continuously improve and optimize data science solutions, evaluating their effectiveness and exploring new methodologies and technologies. Drive the deployment of data-driven solutions into production, ensuring seamless integration and effective communication of results.
Mandatory Qualifications: 8+ years of experience in a data science role, with a track record of delivering impactful solutions in a production environment. Advanced degree (Master's or Ph.D.) in a quantitative field, such as Computer Science, Statistics, Mathematics, or related disciplines. Deep knowledge and expertise in advanced machine learning techniques, statistical analysis, and mathematical modeling. Proficiency in programming languages such as Python, R, or Scala, with experience in building scalable and efficient data science workflows. Deep experience with big data processing frameworks (e.g., Hadoop, Spark) and deep learning frameworks (e.g., TensorFlow, PyTorch). Strong leadership skills, with the ability to guide and inspire a team, prioritize tasks, and meet project goals. Excellent communication and presentation skills, with the ability to convey complex concepts to both technical and non-technical stakeholders. Proven experience in driving data-driven decision-making processes and influencing strategic initiatives. Deep experience with cloud platforms (e.g., AWS, Azure, GCP) and their data science tools and services. A passion for staying up to date with the latest advancements in data science and actively exploring new techniques and technologies.
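To make one of the named problems concrete, here is a toy baseline for address disambiguation using character n-gram TF-IDF vectors and cosine nearest neighbours in scikit-learn. This is a common first-cut approach, not Razorpay's actual method, and the sample addresses are invented.

```python
# Toy baseline for address disambiguation: character n-gram TF-IDF
# vectors plus a cosine nearest-neighbour lookup over known addresses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

addresses = [
    "221B Baker Street, London",
    "221 B Bakr St, Londn",          # noisy variant of the first address
    "10 Downing Street, London",
]

# Character n-grams are robust to typos and abbreviations in addresses.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(addresses)

nn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(X)
query = vec.transform(["221B Bakker Street London"])
dist, idx = nn.kneighbors(query)
print(addresses[idx[0][0]], "cosine distance:", round(float(dist[0][0]), 3))
```

A production system would add normalization, blocking to keep the candidate set small, and a learned matcher on top, but the n-gram baseline above is a standard starting point.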
Posted 1 week ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are looking for candidates with 8+ years of experience for this role.
Job Location: Technopark, Trivandrum
Experience: 8+ years of experience in Microsoft SQL Server administration
Primary skills: Strong experience in Microsoft SQL Server administration
Qualifications: Bachelor's degree in computer science, software engineering, or a related field. Microsoft SQL certifications (MTA Database, MCSA: SQL Server, MCSE: Data Management and Analytics) will be an advantage.
Secondary skills: Experience in MySQL, PostgreSQL, and Oracle database administration. Exposure to Data Lake, Hadoop, and Azure technologies. Exposure to DevOps or ITIL.
Main duties/responsibilities: Optimize database queries to ensure fast and efficient data retrieval, particularly for complex or high-volume operations. Design and implement effective indexing strategies to reduce query execution times and improve overall database performance. Monitor and profile slow or inefficient queries and recommend best practices for rewriting or re-architecting them. Continuously analyze execution plans for SQL queries to identify bottlenecks and optimize them. Database maintenance: schedule and execute regular maintenance tasks, including backups, consistency checks, and index rebuilding (a monitoring sketch follows this posting). Health monitoring: implement automated monitoring systems to track database performance, availability, and critical parameters such as CPU usage, memory, disk I/O, and replication status. Proactive issue resolution: diagnose and resolve database issues (e.g., locking, deadlocks, data corruption) before they impact users or operations. High availability: implement and manage database clustering, replication, and failover strategies to ensure high availability and disaster recovery (e.g., using tools like SQL Server Always On, Oracle RAC, MySQL Group Replication).
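As an illustration of the proactive maintenance duties above, a minimal sketch follows. It assumes the pyodbc driver and queries the standard sys.dm_db_index_physical_stats DMV; the connection string, database name, and the 30% rebuild threshold are placeholder choices, not part of the posting.

```python
# Minimal sketch: flag fragmented indexes on SQL Server for maintenance.
# Requires pyodbc and an installed ODBC driver; connection values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql.example.internal;DATABASE=AppDb;UID=dba;PWD=***"
)
cur = conn.cursor()

# avg_fragmentation_in_percent comes from the index physical-stats DMV.
cur.execute("""
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN sys.indexes i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30 AND i.name IS NOT NULL
""")

for table, index, frag in cur.fetchall():
    # Conventional guidance is REBUILD above roughly 30% fragmentation;
    # here we only emit the statement for review rather than executing it.
    print(f"ALTER INDEX [{index}] ON [{table}] REBUILD;  -- {frag:.1f}% fragmented")
```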
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 11-Jun-2025
About the role: Refer to Responsibilities
What is in it for you: At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.
You will be responsible for
Job Summary: The Margin Discovery team offers income/profit recovery services to Tesco PLC. This role is responsible for auditing Tesco promotions and agreement data (commercial income), and for claiming and recovering money owed to Tesco PLC. Every year we recover millions of pounds for Tesco and work closely with the Product team and suppliers, sharing our audit findings to minimize future losses. Our dedicated and highly experienced audit team utilizes progressive and dynamic financial service solutions and industry-leading technology to achieve maximum success.
In this job, I'm accountable for:
Following our Business Code of Conduct, always acting with integrity and due diligence, and holding these specific risk responsibilities:
Audit Tesco's promotions, sales, defectives, and commercial agreements to identify potential revenue leakage.
Complete tasks and transactions within agreed critical metrics.
Understand business process gaps that can lead to financial irregularities.
Engage with stakeholders and present key issues, opportunities, and status updates.
Identify the root cause of audit findings and collaborate with internal stakeholders to make process changes that reduce or eliminate revenue leakage.
Apply an understanding of accounting principles.
Identify operational improvements and find solutions by applying CI tools and techniques.
Ensure timely and accurate resolution of disputes and questions raised by vendors on audit findings.
Partner across other teams to learn new methods to interpret data as well as develop new ways of analyzing large volumes of data.
Ensure compliance with GSCOP and GCA guidelines.
Use critical thinking and analytical skills with a keen eye for detail to enhance missed-income audit findings.
Key people and teams I work with in and outside of Tesco: Finance team, commercial teams, Product Transformation team, and suppliers.
Operational skills relevant for this job: Strong computer literacy - able to use Microsoft Excel, Word & PowerPoint competently. Logical reasoning. Basic SQL & Hadoop. Basic visualization and interpretation. Ability to work well in an individual and team environment. Highly proficient in spoken and written English. Retail acumen.
Experience relevant for this job: Freshers may also apply - graduates of a Finance/Accounting (or related) bachelor's degree. Experience in accounting, finance, accounts payable, buying, or audit is a plus.
You will need: Refer to Responsibilities
About us: Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.
Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services organisation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business.
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: Oracle Cloud Infrastructure (OCI) is a strategic growth area for Oracle. It is a comprehensive cloud service offering spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). OCI is committed to providing the best in cloud products that meet the needs of our customers who are tackling some of the world's biggest challenges.
About The Network Monitoring (NM) Team: Networking is a mission-critical part of the OCI cloud. Our customers want higher availability, more visibility, greater network security, better network performance and throughput, better capacity planning, root cause analysis, and prediction of failures. We help Oracle Cloud Infrastructure (OCI) build best-in-class cloud monitoring solutions to provide performance monitoring, what-if analysis, AI-enabled root cause analysis, prediction, and capacity planning for Oracle's global cloud network infrastructure. Our mission is to build monitoring services that comprehensively view, analyze, plan, and optimize to scale and operate our networks.
Responsibilities: We are looking for a Senior Member of Technical Staff for the OCI Network Monitoring team who has the expertise and passion to solve difficult problems in globally distributed systems and to build cloud-native observability and analytics solutions at scale using innovative AI/ML solutions. You should be comfortable building complex distributed AI/ML systems that excel at handling huge amounts of data: collecting metrics, building data pipelines, and generating analytics with AI/ML for both real-time and batch processing (a toy example follows this posting). If you are passionate about designing, developing, testing, and delivering AI/ML-based observability services, and are excited to learn and thrive in a fast-paced environment, the NM team is the place for you.
Required/Desired Qualifications:
BS/MS or equivalent in CS or a relevant area
7+ years of experience in software development
1-2 years of experience in developing AI/ML applications using ML models
Proficiency with Java/Python/C++ and object-oriented programming
Networking protocol knowledge such as TCP/IP, Ethernet, BGP, OSPF
Network management technologies such as SNMP, NetFlow, BGP Monitoring Protocol, gNMI
Excellent knowledge of data structures and search/sort algorithms
Excellent organizational, verbal, and written communication skills
Knowledge of cloud computing and networking technologies, including monitoring services
Operational experience running and troubleshooting large networks
Experience developing service-oriented systems
Exposure to Hadoop, Spark, Kafka, Storm, OpenTSDB, Elasticsearch, or other distributed compute platforms
Exposure to LLM frameworks such as LangChain and LlamaIndex
Exposure to LLMs such as GPT-4, Llama 3.1, Cohere Command
Experience with Jira, Confluence, Bitbucket
Knowledge of Scrum and Agile methodologies
Qualifications: Career Level - IC3
About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes.
We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
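To make the analytics work concrete, below is a toy sketch of one common building block: flagging anomalous values in a network-metric stream with a rolling z-score in pandas before any heavier ML is applied. The data is synthetic, and the window and threshold are illustrative choices, not anything specified by the posting.

```python
# Toy sketch: rolling z-score anomaly flagging over a network metric stream.
# Synthetic data stands in for metrics collected via SNMP/NetFlow pipelines.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
latency_ms = pd.Series(rng.normal(20, 2, 500))
latency_ms.iloc[400] = 60  # injected spike to detect

window = 50
rolling_mean = latency_ms.rolling(window).mean()
rolling_std = latency_ms.rolling(window).std()
z = (latency_ms - rolling_mean) / rolling_std

# Flag points more than 4 rolling standard deviations from the rolling mean.
anomalies = latency_ms[z.abs() > 4]
print(anomalies)  # should surface the injected spike at index 400
```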
Posted 1 week ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration
Good to have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 16 years of full-time education
Cloud Database Engineer - HANA
Required Skills: SAP HANA database administration. Knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability. Proficiency in monitoring and maintaining the health and performance of high-availability systems. Experience with public cloud platforms such as GCP, AWS, or Azure. Strong troubleshooting skills and the ability to provide effective resolutions for technical issues.
Desired Skills: Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres. Growth and product mindset and a strong focus on automation. Working knowledge of Kubernetes for container orchestration and scalability.
Activities: Collaborate closely with cross-functional teams to gather requirements and support SAP teams in executing database initiatives. Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments. Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and agreed in advance (a monitoring sketch follows this posting). Act as the point of contact for escalated technical issues with our engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions and unblock our partners.
Requirements: Bachelor's degree in computer science, engineering, or a related field. Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems. Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes. Strong troubleshooting skills and the ability to provide effective resolutions for technical issues. Familiarity with public cloud platforms such as GCP, AWS, or Azure. Understands Agile principles and methodologies. 16 years of full-time education.
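As an illustration of the health-monitoring side of the role, here is a minimal, hedged sketch that polls HANA system-replication status from Python. It assumes SAP's hdbcli driver and the M_SERVICE_REPLICATION system view; the host, port, and credentials are placeholders, and the view and its columns should be verified against the HANA release in use.

```python
# Minimal sketch: poll SAP HANA system-replication status via hdbcli.
# Connection values are placeholders; the monitoring view name
# (M_SERVICE_REPLICATION) is an assumption to verify for your release.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.internal",  # placeholder host
    port=30015,
    user="MONITOR_USER",
    password="***",
)
cur = conn.cursor()
cur.execute(
    "SELECT HOST, PORT, REPLICATION_STATUS FROM SYS.M_SERVICE_REPLICATION"
)
for host, port, status in cur.fetchall():
    if status != "ACTIVE":
        # Hook for alerting/automation, e.g. paging or a failover runbook.
        print(f"replication degraded on {host}:{port} -> {status}")
cur.close()
conn.close()
```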
Posted 1 week ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration, Ansible on Microsoft Azure
Good to have skills: NA
Minimum 3 years of experience is required.
Educational Qualification: 16 years of full-time education
Cloud Database Engineer - HANA
Required Skills: SAP HANA database administration. Knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability. Proficiency in monitoring and maintaining the health and performance of high-availability systems. Experience with public cloud platforms such as GCP, AWS, or Azure. Strong troubleshooting skills and the ability to provide effective resolutions for technical issues.
Desired Skills: Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres. Growth and product mindset and a strong focus on automation. Working knowledge of Kubernetes for container orchestration and scalability.
Activities: Collaborate closely with cross-functional teams to gather requirements and support SAP teams in executing database initiatives. Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments. Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and agreed in advance. Act as the point of contact for escalated technical issues with our engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions and unblock our partners.
Requirements: Bachelor's degree in computer science, engineering, or a related field. Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems. Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes. Strong troubleshooting skills and the ability to provide effective resolutions for technical issues. Familiarity with public cloud platforms such as GCP, AWS, or Azure. Understands Agile principles and methodologies. 16 years of full-time education.
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Oracle Cloud Infrastructure (OCI) is a strategic growth area for Oracle. It is a comprehensive cloud service offering spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). OCI is committed to providing the best in cloud products that meet the needs of our customers who are tackling some of the world's biggest challenges.
About The Network Monitoring (NM) Team: Networking is a mission-critical part of the OCI cloud. Our customers want higher availability, more visibility, greater network security, better network performance and throughput, better capacity planning, root cause analysis, and prediction of failures. We help Oracle Cloud Infrastructure (OCI) build best-in-class cloud monitoring solutions to provide performance monitoring, what-if analysis, AI-enabled root cause analysis, prediction, and capacity planning for Oracle's global cloud network infrastructure. Our mission is to build monitoring services that comprehensively view, analyze, plan, and optimize to scale and operate our networks.
Responsibilities: We are looking for a Senior Member of Technical Staff for the OCI Network Monitoring team who has the expertise and passion to solve difficult problems in globally distributed systems and to build cloud-native observability and analytics solutions at scale using innovative AI/ML solutions. You should be comfortable building complex distributed AI/ML systems that excel at handling huge amounts of data: collecting metrics, building data pipelines, and generating analytics with AI/ML for both real-time and batch processing. If you are passionate about designing, developing, testing, and delivering AI/ML-based observability services, and are excited to learn and thrive in a fast-paced environment, the NM team is the place for you.
Required/Desired Qualifications:
BS/MS or equivalent in CS or a relevant area
7+ years of experience in software development
1-2 years of experience in developing AI/ML applications using ML models
Proficiency with Java/Python/C++ and object-oriented programming
Networking protocol knowledge such as TCP/IP, Ethernet, BGP, OSPF
Network management technologies such as SNMP, NetFlow, BGP Monitoring Protocol, gNMI
Excellent knowledge of data structures and search/sort algorithms
Excellent organizational, verbal, and written communication skills
Knowledge of cloud computing and networking technologies, including monitoring services
Operational experience running and troubleshooting large networks
Experience developing service-oriented systems
Exposure to Hadoop, Spark, Kafka, Storm, OpenTSDB, Elasticsearch, or other distributed compute platforms
Exposure to LLM frameworks such as LangChain and LlamaIndex
Exposure to LLMs such as GPT-4, Llama 3.1, Cohere Command
Experience with Jira, Confluence, Bitbucket
Knowledge of Scrum and Agile methodologies
Qualifications: Career Level - IC3
About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes.
We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description: Oracle Cloud Infrastructure (OCI) is a strategic growth area for Oracle. It is a comprehensive cloud service offering spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). OCI is committed to providing the best in cloud products that meet the needs of our customers who are tackling some of the world's biggest challenges.
About The Network Monitoring (NM) Team: Networking is a mission-critical part of the OCI cloud. Our customers want higher availability, more visibility, greater network security, better network performance and throughput, better capacity planning, root cause analysis, and prediction of failures. We help Oracle Cloud Infrastructure (OCI) build best-in-class cloud monitoring solutions to provide performance monitoring, what-if analysis, AI-enabled root cause analysis, prediction, and capacity planning for Oracle's global cloud network infrastructure. Our mission is to build monitoring services that comprehensively view, analyze, plan, and optimize to scale and operate our networks.
Responsibilities: We are looking for a Senior Member of Technical Staff for the OCI Network Monitoring team who has the expertise and passion to solve difficult problems in globally distributed systems and to build cloud-native observability and analytics solutions at scale using innovative AI/ML solutions. You should be comfortable building complex distributed AI/ML systems that excel at handling huge amounts of data: collecting metrics, building data pipelines, and generating analytics with AI/ML for both real-time and batch processing. If you are passionate about designing, developing, testing, and delivering AI/ML-based observability services, and are excited to learn and thrive in a fast-paced environment, the NM team is the place for you.
Required/Desired Qualifications:
BS/MS or equivalent in CS or a relevant area
7+ years of experience in software development
1-2 years of experience in developing AI/ML applications using ML models
Proficiency with Java/Python/C++ and object-oriented programming
Networking protocol knowledge such as TCP/IP, Ethernet, BGP, OSPF
Network management technologies such as SNMP, NetFlow, BGP Monitoring Protocol, gNMI
Excellent knowledge of data structures and search/sort algorithms
Excellent organizational, verbal, and written communication skills
Knowledge of cloud computing and networking technologies, including monitoring services
Operational experience running and troubleshooting large networks
Experience developing service-oriented systems
Exposure to Hadoop, Spark, Kafka, Storm, OpenTSDB, Elasticsearch, or other distributed compute platforms
Exposure to LLM frameworks such as LangChain and LlamaIndex
Exposure to LLMs such as GPT-4, Llama 3.1, Cohere Command
Experience with Jira, Confluence, Bitbucket
Knowledge of Scrum and Agile methodologies
Qualifications: Career Level - IC3
About Us: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes.
We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.
Cities such as Bengaluru, Chennai, Hyderabad, and Pune, which feature prominently in the listings above, are known for their thriving IT industries and have a high demand for Hadoop professionals.
The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.
In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.
In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.
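For job seekers brushing up on the ecosystem, here is a toy sketch of the classic Spark word count, a staple exercise in Hadoop-related interviews; the HDFS path is a placeholder.

```python
# Toy PySpark sketch: the canonical word count over a file in HDFS.
# The input path is a placeholder; requires a Spark installation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()

lines = spark.read.text("hdfs:///data/sample.txt")  # placeholder path
counts = (
    lines.rdd.flatMap(lambda row: row.value.split())  # split lines into words
    .map(lambda word: (word, 1))                      # pair each word with 1
    .reduceByKey(lambda a, b: a + b)                  # sum counts per word
)
print(counts.take(10))
spark.stop()
```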
As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!