
3683 Hadoop Jobs - Page 37

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 8.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Key Responsibilities
- Lead the deployment, configuration, and ongoing administration of Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems.
- Maintain and monitor core components of the Hadoop ecosystem, including Zookeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, and HBase.
- Take charge of the day-to-day running of Hadoop clusters using tools like Ambari, Cloudera Manager, or other monitoring tools, ensuring continuous availability and optimal performance.
- Manage and provide expertise in HBase and Solr clusters, including capacity planning and performance tuning.
- Perform installation, configuration, and troubleshooting of Linux operating systems and network components relevant to big data environments.
- Develop and implement automation scripts using Unix shell/Ansible scripting to streamline operational tasks and improve efficiency (an illustrative monitoring sketch follows this listing).
- Manage and maintain KVM virtualization environments.
- Oversee clusters, storage solutions, backup strategies, and disaster recovery plans for big data infrastructure.
- Implement and manage comprehensive monitoring tools to proactively identify and address system anomalies and performance bottlenecks.
- Work closely with database, network, and application teams to ensure high availability and expected performance of all big data applications.
- Interact directly with customers at their premises to provide technical support and resolve issues related to system and Hadoop administration.
- Coordinate closely with internal QA and Engineering teams to facilitate issue resolution within promised timelines.

Skills & Qualifications
- Experience: 5-8 years of strong individual contributor experience as a DevOps, System, and/or Hadoop administrator.
- Domain Expertise: proficient in Linux administration; extensive experience with Hadoop infrastructure and administration; strong knowledge and experience with Solr; proficiency in configuration management tools.
- Data Ecosystem Components: hands-on experience and strong knowledge of managing and maintaining Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystem deployments; core components like Zookeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, and HBase; cluster management tools such as Ambari and Cloudera Manager.
- Scripting: strong scripting skills in one or more of Perl, Python, or Shell.
- Infrastructure Management: strong experience working with clusters, storage solutions, backup strategies, database management systems, monitoring tools, and disaster recovery.
- Virtualization: experience managing KVM virtualization.
- Analytical Skills: excellent analytical and problem-solving skills, with a methodical approach to debugging complex issues.
- Communication: strong communication skills (verbal and written), with the ability to interact effectively with technical teams and customers.
- Education: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, or equivalent relevant work experience.

(ref:hirist.tech)
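The monitoring duties above (HDFS health checks, proactive anomaly detection) are commonly scripted against standard Hadoop CLI tooling. The following is a minimal illustrative sketch, not part of the posting: it assumes the `hdfs` client is on the PATH, and the report's text format (which varies across Hadoop versions) contains a "Dead datanodes (N)" header; the alerting hook is left hypothetical.

```python
import re
import subprocess


def hdfs_report() -> str:
    """Run `hdfs dfsadmin -report` and return its text output (assumes the hdfs CLI is installed)."""
    return subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout


def dead_datanodes(report: str) -> int:
    """Extract the dead-DataNode count from the report text; returns 0 if the header is absent."""
    match = re.search(r"Dead datanodes\s*\((\d+)\)", report)
    return int(match.group(1)) if match else 0


if __name__ == "__main__":
    dead = dead_datanodes(hdfs_report())
    if dead > 0:
        # Hook point for alerting (email, pager, ticket) -- intentionally left out here.
        print(f"WARNING: {dead} dead DataNode(s) detected")
    else:
        print("All DataNodes reported live")
```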

Posted 1 week ago

Apply

4.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site


Salary: 10 to 25 LPA
Title: Sr. Data Scientist / ML Engineer (4+ years)

Required Technical Skillset
- Languages: Python, PySpark
- Frameworks: Scikit-learn, TensorFlow, Keras, PyTorch
- Libraries: NumPy, Pandas, Matplotlib, SciPy, Scikit-learn, boto3
- Databases: Relational (PostgreSQL), NoSQL (MongoDB)
- Cloud: AWS cloud platforms
- Other Tools: Jenkins, Bitbucket, JIRA, Confluence

A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from data and make predictions or decisions. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems.

The key responsibilities of a machine learning engineer include:
- Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models.
- Model building: designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning.
- Evaluating model performance using metrics such as accuracy, precision, recall, and F1 score (an illustrative evaluation sketch follows this listing).
- Deploying machine learning models in production environments and integrating them into existing systems using CI/CD pipelines and AWS SageMaker.
- Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency.
- Working closely with software engineers, product managers, and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization.

Requirements And Skills
- Mathematics and Statistics: a strong foundation in mathematics and statistics is essential, including linear algebra, calculus, probability, and statistics, to understand the underlying principles of machine learning algorithms.
- Programming Skills: proficiency in programming languages such as Python, with the ability to write efficient, scalable, and maintainable code for machine learning models and algorithms.
- Machine Learning Techniques: a deep understanding of supervised learning, unsupervised learning, and reinforcement learning, and familiarity with different types of models such as decision trees, random forests, neural networks, and deep learning.
- Data Analysis and Visualization: the ability to analyze and manipulate large data sets, with familiarity in data cleaning, transformation, and visualization techniques to identify patterns and insights.
- Deep Learning Frameworks: familiarity with frameworks such as TensorFlow, PyTorch, and Keras, and the ability to build and train deep neural networks for various applications.
- Big Data Technologies: experience working with big data technologies such as Hadoop, Spark, and NoSQL databases, plus familiarity with distributed computing and parallel processing for large data sets.
- Software Engineering: a good understanding of software engineering principles such as version control, testing, and debugging, and the ability to work with tools such as Git, Jenkins, and Docker.
- Communication and Collaboration: good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders.

(ref:hirist.tech)
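The evaluation responsibility above (accuracy, precision, recall, F1) maps directly onto scikit-learn's metrics API. A minimal sketch, assuming a fitted binary classifier and a held-out test split; the toy data and variable names are illustrative, not from the posting.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Toy data standing in for the real feature matrix and labels.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
y_pred = model.predict(X_test)

print(f"accuracy : {accuracy_score(y_test, y_pred):.3f}")
print(f"precision: {precision_score(y_test, y_pred):.3f}")
print(f"recall   : {recall_score(y_test, y_pred):.3f}")
print(f"F1       : {f1_score(y_test, y_pred):.3f}")
```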

Posted 1 week ago

Apply

2.0 years

0 Lacs

Greater Chennai Area

On-site


Job Title: Java Developer - Big Data
Experience: 6+ years
Location: Bangalore | Chennai
Notice Period: Immediate to 4 weeks

Job Summary
We are seeking a Java Developer with 2+ years of hands-on Big Data experience to join our high-performing data engineering team. The ideal candidate should have strong Java development skills along with practical exposure to Big Data technologies and data processing frameworks. You will play a key role in building scalable, data-driven applications.

Key Responsibilities
- Design, develop, and maintain Java-based applications that interact with Big Data ecosystems.
- Build scalable data pipelines using Big Data technologies (Hadoop, Spark, Hive, Kafka, etc.).
- Collaborate with data engineers, analysts, and architects to implement data solutions.
- Optimize and tune data processes for performance and scalability.
- Develop REST APIs for data access and processing.
- Ensure data quality, security, and reliability across pipelines and services.

Required Skills
- 6-12 years of overall experience in Java/J2EE application development.
- 2+ years of hands-on experience with Big Data tools like Hadoop, Spark, Hive, HBase, or Kafka.
- Strong experience with Spring Boot, REST APIs, and microservices architecture.
- Proficient in SQL, data structures, and algorithms.
- Familiarity with distributed systems and batch and stream processing.
- Experience working with data lakes, data warehouses, or large-scale data platforms.

Nice To Have
- Experience with Apache Airflow, Flink, or Presto.
- Knowledge of NoSQL databases like Cassandra or MongoDB.
- Exposure to cloud platforms and Big Data services.
- Familiarity with containerization tools like Docker and Kubernetes.

(ref:hirist.tech)

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Us
Lemma Technologies is a software start-up based in Baner, Pune. We are unleashing the power of programmatic AdTech in the DOOH (Digital Out Of Home) world. Our mission is to transform Digital Out Of Home media to connect brands with their consumers by establishing authentic and transparent standards. Innovation is our DNA and transparency is our RNA. We are revolutionising the DOOH industry. As an organisation, we successfully deliver brand stories seamlessly across all large-format digital screens, from DOOH to CTV and even mobile and desktop devices. We are focussed on connecting DOOH media to mainstream digital, enabling brands to deploy omni-digital strategies through our platform.

Roles & Responsibilities
- Chief Data Scientist / Architect of Lemma Technologies. This role will define and execute the technical strategy for adopting modern AI/ML practices to acquire and process data and provide actionable insights to Lemma customers.
- Good understanding of the entire journey of data acquisition, data warehousing, information architecture, dashboards, reports, predictive insights, and adoption of AI/ML and NLP, providing innovative data-oriented insights for Lemma customers.
- Deep understanding of data science and technology, with the ability to recommend adoption of the right technical tools and strategies.
- Expected to be a hands-on technical expert who will build and guide a technical data team.
- Build, design, and implement our highly scalable, fault-tolerant, highly available big data platform to process terabytes of data and provide customers with in-depth analytics.
- Deep data science and AI/ML hands-on experience to give actionable insights to advertisers and customers of Lemma.
- Good overview of the modern technology stack such as Spark, Hadoop, Kafka, HBase, Hive, Presto, etc.
- Automate high-volume data collection and processing to provide real-time data analytics.
- Customize Lemma's reporting and analytics platform based on customer requirements and deliver scalable, production-ready solutions.
- Lead multiple projects to develop features for the data processing and reporting platform; collaborate with product managers, cross-functional teams, and other stakeholders to ensure successful delivery of projects.
- Leverage a broad range of Lemma's data architecture strategies, proposing both data flows and storage solutions.
- Manage Hadoop MapReduce and Spark jobs and resolve any ongoing issues with operating the cluster.
- Work closely with cross-functional teams on improving availability and scalability of the large data platform and functionality of Lemma software.
- Participate in Agile/Scrum processes such as sprint planning, sprint retrospectives, backlog grooming, user story management, and work item prioritization.

Skills Required
- 10 to 12+ years of proven experience in designing, implementing, and delivering complex, scalable, and resilient platforms and services.
- Experience in building AI, machine learning, and data analytics solutions.
- Experience in OLAP (Snowflake, Vertica, or similar) would be an added advantage.
- Ability to understand vague business problems and convert them into working solutions.
- Excellent spoken and written communication skills with a collaborative approach.
- Dedication to developing high-quality software and products.
- Curiosity to explore and understand data is a strong plus.
- Deep understanding of Big Data and distributed systems (MapReduce, Spark, Hive, Kafka, Oozie, Airflow).

(ref:hirist.tech)

Posted 1 week ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


We are looking for a highly skilled Big Data Engineer with expertise in cloud technologies to join our team. The ideal candidate will be responsible for designing, developing, and maintaining scalable big data solutions, ensuring efficient data processing, storage, and analytics. This role involves working with distributed systems, cloud platforms, and modern data frameworks to support real-time and batch data pipelines. The above-mentioned skillsets and roles are used for creating content and labs.

Responsibilities
- Design, implement, and manage scalable big data architectures on AWS, Azure, or GCP.
- Develop ETL pipelines for ingesting, processing, and transforming large datasets.
- Work with Python, Apache Spark, Hadoop, and Kafka to build efficient data processing solutions.
- Implement data lakes, data warehouses, and streaming architectures.
- Optimize database and query performance for large-scale datasets.
- Collaborate with SMEs, clients, and software engineers to deliver content.
- Ensure data security, governance, and compliance with industry standards.
- Automate workflows using Apache Airflow or other orchestration tools.
- Monitor and troubleshoot data pipelines to ensure reliability and scalability.

Requirements
- Minimum educational qualifications: B.E., B.Sc, M.Sc, MCA.
- Proficiency in Python, Java, or Scala for data processing.
- Hands-on experience with Apache Spark, Hadoop, Kafka, Flink, Storm.
- Hands-on experience working with SQL and NoSQL databases.
- Strong expertise in cloud-based data solutions (AWS / Google / Azure).
- Hands-on experience in building and managing ETL/ELT pipelines.
- Knowledge of containerization and orchestration (Docker or Kubernetes).
- Hands-on experience with real-time data streaming and serverless data processing.
- Familiarity with machine learning pipelines and AI-driven analytics.
- Strong understanding of CI/CD and ETL pipelines for data workflows.

Technical Skills
- Big Data Technologies: Apache Spark, Hadoop, Kafka, Flink, Storm.
- Cloud Platforms: AWS / Google / Azure.
- Programming Languages: Python, Java, Scala, SQL, PySpark.
- Data Storage and Processing: Data Lakes, Warehouses, ETL/ELT Pipelines.
- Orchestration: Apache Airflow, Prefect, Dagster.
- Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra).
- Security and Compliance: IAM, Data Governance, Encryption.
- DevOps Tools: Docker, Kubernetes, Terraform, CI/CD Pipelines.

Soft Skills
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work in an agile, fast-paced environment.
- Attention to detail and data accuracy.
- Self-motivated and proactive.

Certifications
- Any Cloud or Data-related certifications.

(ref:hirist.tech)
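The workflow-automation requirement above typically translates into an orchestration DAG. A minimal illustrative sketch using Apache Airflow's TaskFlow API (Airflow 2.4+ assumed); the task names, schedule, and toy data are hypothetical, not taken from the posting.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["etl"])
def daily_etl():
    @task
    def extract() -> list[dict]:
        # In a real pipeline this would pull from a source system (API, database, object store).
        return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Simple illustrative transformation step.
        return [{**r, "value_doubled": r["value"] * 2} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Stand-in for a write to a warehouse or data lake.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


daily_etl()
```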

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Job Title: Databricks Dashboard Engineer

Job Summary
We are looking for a versatile Databricks Dashboard Engineer with strong SQL coding skills who can design and build interactive dashboards as well as contribute to data engineering efforts. The engineer works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices, and is responsible for repeatable, lean, and maintainable enterprise BI design across organizations, partnering effectively with the client team. Leadership is expected not only in the conventional sense but also within the team: candidates should demonstrate innovation, critical thinking, optimism and positivity, communication, time management, collaboration, problem-solving, independence, knowledge sharing, and approachability.

Responsibilities
- Design, develop, and maintain interactive dashboards and visualizations using Databricks SQL, Delta Lake, and notebooks.
- Collaborate with business stakeholders to gather dashboard requirements and deliver actionable insights.
- Optimize data models and queries for performance and scalability.
- Integrate Databricks data with BI tools such as Power BI, Tableau, or Looker.
- Automate dashboard refreshes and monitor data quality.
- Maintain comprehensive documentation for dashboards.
- Work closely with data engineers and analysts to ensure data governance and reliability.
- Stay current with Databricks platform capabilities and dashboarding best practices.
- Design, develop, test, and deploy data model and dashboard processes (batch or real-time) using tools such as Databricks and Power BI.
- Create functional and technical documentation, e.g. data model architecture documentation, unit testing plans and results, data integration specifications, and data testing plans.
- Provide a consultative approach with business users, asking questions to understand the business need and deriving conceptual, logical, and physical data models based on those needs.
- Perform data analysis to validate data models and confirm the ability to meet business needs.
- Stay current with emerging and changing technologies to recommend and implement beneficial technologies and approaches for data modelling and dashboarding.
- Ensure proper execution/creation of methodology, training, templates, resource plans, and engagement review processes.
- Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
- Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else that is data related at the project or business-unit level.
- Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards. Toolsets include but are not limited to Databricks, SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik.
- Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Required Qualifications
- 8 years of industry implementation experience with data warehousing and BI tools such as AWS Redshift, Synapse, Databricks, Power BI, Tableau, Qlik, Looker, etc.
- 3+ years of experience in Databricks dashboard development.
- 3-5 years of development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.
- Proficient in SQL, data modeling, and query optimization.
- Experience with Databricks SQL, Delta Lake, and notebook development.
- Familiarity with BI visualization tools like Power BI, Tableau, or Looker.
- Understanding of data warehousing, ETL/ELT pipelines, and cloud data platforms.
- Bachelor's degree or equivalent experience; Master's degree preferred.
- Strong data warehousing, OLTP systems, data integration, and SDLC background.
- Strong experience with Agile processes (Scrum cadences, roles, deliverables) and working experience in Azure DevOps, JIRA, or similar, with experience in CI/CD using one or more code management platforms.
- Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift, etc.).
- Understanding of modern data warehouse capabilities and technologies such as real-time, cloud, and Big Data.
- Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP).
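Dashboard datasets on Databricks are usually prepared as Delta tables that Databricks SQL (or Power BI) then queries. A minimal sketch of that pattern using PySpark inside a notebook; the table and column names are hypothetical, not from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# In a Databricks notebook `spark` already exists; getOrCreate() keeps the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

# Curate a small aggregate table that a dashboard tile can query directly.
orders = spark.table("sales.orders")  # hypothetical source Delta table
daily_revenue = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

# Write as a managed Delta table; a Databricks SQL dashboard or Power BI report can then chart it.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```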

Posted 1 week ago

Apply

10.0 years

0 Lacs

Delhi, India

Remote


About Markovate
At Markovate, we don't just follow trends, we drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI consulting and Gen AI development to pioneering AI agents and agentic AI, we empower our partners to lead their industries with forward-thinking precision and unmatched expertise.

Job Summary
The AI Solution Architect will lead the design and implementation of AI-powered solutions for clients, ensuring that they align with business goals and provide scalable, high-performance results. The AI Solution Architect will work closely with clients, development teams, and stakeholders to design and deploy cutting-edge AI solutions in various domains, including machine learning, natural language processing, and computer vision.

Responsibilities
- Understand the values and vision of the organization.
- Protect the intellectual property.
- Adhere to all policies and procedures.
- Lead the design, development, and implementation of AI-powered solutions, ensuring alignment with client requirements and business objectives.
- Lead the AI solution lifecycle, from concept to deployment, ensuring that milestones and deadlines are met.
- Collaborate with clients, product managers, and technical teams to understand business challenges and translate them into AI-driven solutions.
- Design end-to-end AI architectures, including data pipelines, model selection, deployment strategies, and performance monitoring.
- Oversee the integration of AI models into production environments, ensuring that they deliver consistent, high-quality results.
- Guide the adoption of best practices in AI, machine learning, and data science, ensuring that solutions are scalable, efficient, and cost-effective.
- Evaluate and select appropriate AI tools, frameworks, and technologies for specific project requirements.
- Provide technical leadership and mentorship to junior AI engineers and data scientists.
- Ensure that AI models are tested thoroughly, validated, and optimized for performance, accuracy, and scalability.
- Work closely with cross-functional teams, including data engineers, business analysts, and UX/UI designers, to deliver end-to-end AI solutions.
- Stay current with the latest advancements in AI technologies and contribute to the ongoing development of AI solutions and frameworks within the organization.
- Communicate the technical aspects of AI solutions clearly to both technical and non-technical stakeholders.

Required Skills
- Expertise in designing AI architectures and solutions for complex business problems.
- Deep understanding of machine learning, deep learning, NLP, and computer vision algorithms and their application to real-world problems.
- Strong experience working with data pipelines, data storage, and processing technologies.
- Ability to lead and guide a team of AI engineers and data scientists to build scalable AI solutions.
- Experience with performance optimization, model evaluation, and continuous improvement of AI systems.
- Extensive experience with AI frameworks and tools such as TensorFlow, PyTorch, Keras, Scikit-learn, and others.
- Excellent leadership abilities in managing, mentoring, and guiding a team.
- Excellent communication and presentation skills, with the ability to explain complex AI concepts to non-technical stakeholders.
- Collaborative approach to effectively present and advocate for quick design solutions.
- Ability to work effectively with cross-functional teams and deliver results in a collaborative environment.
- Stays updated on the latest design trends, tools, and technologies, bringing innovative ideas to enhance the product experience.
- A proactive approach to problem solving, with a focus on delivering exceptional customer satisfaction and the ability to navigate complex technical challenges and design innovative solutions.

Technical Skills
- Excellent understanding of AI concepts, including supervised and unsupervised learning, reinforcement learning, model training, evaluation, and optimization.
- Experience with Big Data technologies such as Hadoop, Spark, and Kafka, and working with large datasets.
- Strong programming skills in Python, Java, or similar languages, with hands-on experience in machine learning libraries.
- Proficiency in cloud-based AI deployment and orchestration tools.
- Ability to stay ahead of industry trends and continuously evaluate new tools, frameworks, and techniques.
- Strong knowledge of cloud platforms such as AWS, Azure, and Google Cloud, with expertise in deploying AI models at scale.
- Familiarity with DevOps practices for AI, including continuous integration and deployment pipelines for machine learning.

Other Attributes
- Demonstrates proactive thinking.
- Strong interpersonal relations, expert business acumen, and mentoring skills.
- Strong problem-solving skills with attention to detail.
- Ability to work under stringent deadlines and demanding client expectations.

Relevant Information
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Over 10 years of experience in software development, with at least 8 years in AI/ML solution architecture and design.
- Proven experience in designing and implementing AI solutions in production environments, including machine learning, NLP, computer vision, and deep learning.
- This role offers the flexibility of working remotely in India.

(ref:hirist.tech)

Posted 1 week ago

Apply

12.0 years

0 Lacs

Delhi, India

On-site


Job Description
We are seeking an AI Solution Architect to lead the design and implementation of AI-driven solutions that align with business objectives and deliver scalable, high-performance results. This role requires deep expertise in AI/ML, solution architecture, and cloud deployment while collaborating with clients, developers, and stakeholders to drive AI innovation.

Required Skills
- 12+ years in software development, with 8+ years in AI/ML solution architecture.
- Expertise in AI/ML frameworks (TensorFlow, PyTorch, Keras, Scikit-learn).
- Strong knowledge of cloud platforms (AWS, Azure, GCP) and AI model deployment at scale.
- Experience with data pipelines, big data technologies (Hadoop, Spark, Kafka), and cloud orchestration tools.
- Strong programming skills in Python, with hands-on experience in ML model training, evaluation, and optimization.
- Familiarity with DevOps for AI (CI/CD pipelines, MLOps best practices).
- Strong leadership, communication, and problem-solving skills, with a proactive and collaborative mindset.

Key Responsibilities
- Architect and implement AI-powered solutions in machine learning, NLP, and computer vision to solve complex business challenges.
- Lead the AI solution lifecycle from concept to deployment, ensuring scalability, performance, and efficiency.
- Design end-to-end AI architectures, including data pipelines, model selection, deployment, and monitoring strategies.
- Integrate AI models into production environments, optimizing for performance, accuracy, and business impact.
- Evaluate and recommend AI tools, frameworks, and technologies for various projects.
- Mentor and guide AI engineers and data scientists, fostering best practices in AI development.
- Collaborate with cross-functional teams (data engineers, business analysts, UI/UX designers) to deliver AI-driven products.
- Ensure compliance with AI ethics, security, and governance while staying updated with the latest AI advancements.
- Communicate technical solutions effectively to both technical and non-technical stakeholders.

Preferred Qualifications
- Experience with supervised/unsupervised learning, reinforcement learning, and deep learning.
- Understanding of international AI compliance, security, and governance standards.
- Ability to navigate complex technical challenges and drive AI innovation in real-world applications.
- Bachelor's/Master's degree in Computer Science, AI, or related fields.

(ref:hirist.tech)
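Integrating a trained model into a production environment, as both architect listings describe, often comes down to exposing it behind a small service. A minimal illustrative sketch using FastAPI and a joblib-serialized scikit-learn model; the framework choice, artifact file name, and feature schema are assumptions for illustration, not requirements from the posting.

```python
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving-sketch")

# Assumed artifact produced earlier in the training pipeline, e.g. joblib.dump(model, "model.joblib").
model = joblib.load("model.joblib")


class Features(BaseModel):
    values: list[float]  # flat feature vector; a real schema would be domain-specific


@app.post("/predict")
def predict(payload: Features) -> dict:
    x = np.asarray(payload.values).reshape(1, -1)
    prediction = model.predict(x)
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn serve:app --reload  (assuming this file is saved as serve.py)
```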

Posted 1 week ago

Apply

5.0 - 6.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


Job Summary
We are seeking a highly skilled and experienced Senior Data Scientist with a strong focus on Generative AI and Natural Language Processing (NLP) to join our growing team. You will play a crucial role in developing and implementing cutting-edge AI/ML solutions, leveraging your expertise in generative models, NLP, and deep learning to solve complex business challenges. This role requires a deep understanding of advanced statistical and machine learning techniques, excellent communication skills, and the ability to collaborate effectively with product development and leadership teams. Experience in a startup environment is a plus.

Key Responsibilities
- Data Acquisition & Automation: identify relevant data sources and build automated data collection processes.
- Data Preprocessing: preprocess structured and unstructured data to prepare it for model training and analysis.
- Model Development: develop and innovate machine learning and deep learning algorithms, including generative AI models, NLP models, forecasting models, and network graphs.
- Model Building: build predictive models, machine learning algorithms, and data pipelines for end-to-end solutions.
- Problem Solving: propose data-driven solutions and strategies to address business challenges.
- Collaboration & Communication: collaborate effectively with product development teams and communicate technical information clearly to senior leadership teams.
- Problem-Solving Sessions: participate actively in problem-solving sessions, contributing your expertise and insights.

Required Technical Skills
- Generative AI: extensive background in generative AI algorithms and deep understanding of generative models.
- NLP: deep understanding of NLP algorithms, techniques, and applications (a simple baseline sketch follows this listing).
- Deep Learning: strong knowledge of deep learning architectures and frameworks.
- Machine Learning: solid foundation in machine learning algorithms, including supervised, unsupervised, and reinforcement learning.
- Statistical Modeling: strong mathematical and statistical skills, including algebra and statistical modeling.
- Programming Languages: fluency in at least one data science/analytics programming language (Python, R, Julia); Python is highly preferred.
- Data Pipeline Development: experience building data pipelines for end-to-end solutions.

Preferred Skills
- Cloud Computing: familiarity with cloud platforms (AWS, Azure, GCP) and their AI/ML services.
- Data Visualization: experience with data visualization tools (e.g., Tableau, Power BI).
- Big Data Technologies: knowledge of big data technologies (e.g., Spark, Hadoop).

Required Experience & Qualifications
- Education: Bachelor's degree in a highly quantitative field (Computer Science, Engineering, Physics, Math, Operations Research, or equivalent experience).
- Experience: 5-6 years of advanced analytics experience, preferably in startups or marquee companies. Startup experience is a plus.

Personal Attributes
- Strong problem-solving and analytical skills.
- Excellent communication and presentation skills, with the ability to explain complex technical concepts clearly.
- Ability to work independently and as part of a team.
- Proactive, self-motivated, and results-oriented.
- Passion for AI/ML and its potential to solve real-world problems.

(ref:hirist.tech)
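An NLP-focused role like this usually starts from a simple supervised text baseline before moving to generative or deep models. A minimal sketch with scikit-learn (TF-IDF features feeding logistic regression); the toy texts and labels are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus standing in for real labelled text data.
texts = [
    "great product, fast delivery",
    "terrible support, never again",
    "love the interface",
    "refund took weeks, very upset",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# TF-IDF features plus a linear classifier: a standard NLP baseline.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
baseline.fit(texts, labels)

print(baseline.predict(["delivery was great"]))  # expected: [1]
```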

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Title: Data Engineer
Location: Bangalore
Experience: 3+ years

About the Opportunity
We are urgently looking for experienced Data Engineers to join our team at Hexamobile, Bangalore. Ideal candidates will have a strong background in Python, PySpark, and ETL processes, with Azure cloud experience being a strong plus.

Responsibilities
- Design, develop, and maintain scalable and efficient data pipelines using Python and PySpark.
- Build and optimize ETL (Extract, Transform, Load) processes to ingest, clean, transform, and load data from various sources into data warehouses and data lakes.
- Work with large and complex datasets, ensuring data quality, integrity, and reliability.
- Collaborate closely with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with clean and well-structured data.
- Monitor and troubleshoot data pipelines, identifying and resolving issues to ensure continuous data flow.
- Implement data quality checks and validation processes to maintain high data accuracy.
- Develop and maintain comprehensive documentation for data pipelines, ETL processes, and data models.
- Optimize data systems and pipelines for performance, scalability, and cost-efficiency.
- Implement data security and governance policies and procedures.
- Stay up-to-date with the latest advancements in data engineering technologies and best practices.
- Work in an agile environment, participating in sprint planning, daily stand-ups, and code reviews.
- Contribute to the design and architecture of our data platform.

Required Skills
- Python: strong proficiency in Python programming, including experience with data manipulation libraries (e.g., Pandas, NumPy).
- PySpark: extensive hands-on experience with Apache Spark using PySpark for large-scale data processing and distributed computing (a minimal pipeline sketch follows this listing).
- ETL Processes: deep understanding of ETL concepts, methodologies, and best practices, with proven experience in designing, developing, and implementing ETL pipelines.
- SQL: solid understanding of SQL and experience in querying, manipulating, and transforming data in relational databases.
- Databases: strong understanding of various database systems, including relational databases (e.g., PostgreSQL, MySQL, SQL Server) and potentially NoSQL databases.
- Version Control: experience with version control systems, particularly Git, and platforms like GitHub or GitLab (i.e., working with branches and pull requests).

Preferred Skills
- Azure cloud experience: hands-on experience with Microsoft Azure cloud services, particularly data-related services such as Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure SQL Database, and Azure Data Lake Storage.
- Experience with data warehousing concepts.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Minimum of 3 years of professional experience as a Data Engineer.
- Proven experience in building and maintaining data pipelines using Python and PySpark.
- Strong analytical and problem-solving skills.
- Good verbal and written communication skills.
- Ability to work effectively both independently and as part of a team.
- Must be available to join immediately.

Additional Points
- Experience with other big data technologies (Hadoop, Hive, Kafka, Apache Airflow).
- Knowledge of data governance and data quality frameworks.
- Experience with CI/CD pipelines for data engineering workflows.
- Familiarity with data visualization tools (Power BI, Tableau).
- Experience with other cloud platforms (AWS, GCP).

(ref:hirist.tech)
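The core responsibility here, a Python/PySpark ETL pipeline, typically follows a read-transform-write pattern. A minimal sketch under assumed file paths and column names (none of which come from the posting).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw CSV landing zone (path and schema are hypothetical).
raw = spark.read.option("header", True).csv("/data/raw/transactions.csv")

# Transform: basic cleaning and typing, plus a derived column and deduplication.
clean = (
    raw
    .dropna(subset=["transaction_id", "amount"])
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("ingest_date", F.current_date())
    .dropDuplicates(["transaction_id"])
)

# Load: write partitioned Parquet into the curated zone of a data lake.
clean.write.mode("overwrite").partitionBy("ingest_date").parquet("/data/curated/transactions")

spark.stop()
```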

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description

Key Responsibilities
- Design and develop interactive dashboards, reports, and visualizations using Power BI to drive critical business insights.
- Write complex SQL queries, stored procedures, and functions to effectively extract, transform, and load (ETL) data from various sources.
- Optimize and maintain SQL databases, ensuring data integrity, performance, and reliability.
- Develop robust data models and implement sophisticated DAX calculations in Power BI for advanced analytics.
- Integrate Power BI with diverse data sources, including various databases, cloud storage solutions, and APIs.
- Work closely with business stakeholders to meticulously gather requirements and translate them into actionable Business Intelligence solutions.
- Troubleshoot performance issues related to Power BI dashboards and SQL queries, ensuring optimal system performance.
- Stay updated with the latest trends and advancements in Power BI, SQL, and the broader field of data analytics.

All About You
- Hands-on experience managing technology projects, with a demonstrated ability to understand complex data and technology initiatives.
- Ability to lead and influence others to advance deliverables.
- Understanding of emerging technologies including, but not limited to, cloud architecture, machine learning/AI, and Big Data infrastructure.
- Data architecture experience and experience in building data models.
- Experience deploying and working with big data technologies like Hadoop, Spark, and Sqoop.
- Experience with streaming frameworks like Kafka and Axon, and pipelines like NiFi.
- Proficient in OO programming (Python, Java/Spring Boot/J2EE, and Scala).
- Experience with the Hadoop ecosystem (HDFS, YARN, MapReduce, Spark, Hive, Impala).
- Experience with Linux, the Unix command line, Unix shell scripting, SQL, and any scripting language.
- Experience with data visualization tools such as Tableau, Domo, and/or Power BI is a plus.
- Experience presenting data findings in a readable and insight-driven format.
- Experience building support decks.

(ref:hirist.tech)

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Skills: Python, Spark, Data Engineer, Cloudera, On-premise, Azure, Snowflake, Kafka

Overview Of The Company
Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview
The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data, we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About The Role
Title: Lead Data Engineer
Location: Mumbai

Responsibilities
- End-to-End Data Pipeline Development: design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
- Reusable Components & Frameworks: develop reusable data pipeline components and contribute to the team's data pipeline framework evolution.
- Data Architecture & Solutions: contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
- Data Governance & Automation: champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
- Database Expertise: excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data (a minimal streaming sketch follows this listing).
- Cloud Expertise: knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
- CI/CD Expertise: experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes
- Problem-Solving & Troubleshooting: proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
- Communication & Collaboration: excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
- Continuous Learning & Adaptability: a demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
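The streaming requirement above (Kafka feeding Spark) is commonly implemented with Spark Structured Streaming. A minimal sketch assuming a local Kafka broker, a hypothetical `events` topic, and the spark-sql-kafka connector on the classpath; the schema, sink path, and checkpoint location are likewise assumptions.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read a Kafka topic as a streaming source and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Write the parsed stream to Parquet with checkpointing for fault-tolerant output.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/streams/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```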

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Skills: Python, Apache Spark, Snowflake, Data Engineer, Spark, Kafka, Azure

Overview Of The Company
Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview
The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data, we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About The Role
Title: Lead Data Engineer
Location: Mumbai

Responsibilities
- End-to-End Data Pipeline Development: design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
- Reusable Components & Frameworks: develop reusable data pipeline components and contribute to the team's data pipeline framework evolution.
- Data Architecture & Solutions: contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
- Data Governance & Automation: champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
- Database Expertise: excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
- Cloud Expertise: knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs and GCP Dataproc, Dataflow, BigQuery.
- CI/CD Expertise: experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes
- Problem-Solving & Troubleshooting: proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
- Communication & Collaboration: excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
- Continuous Learning & Adaptability: a demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.

Posted 1 week ago

Apply

7.5 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: Engineering graduate, preferably Computer Science graduate; 15 years of full-time education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.

Roles & Responsibilities:
- Assist with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data pipelines using Databricks Unified Data Analytics Platform.
- Design and implement data security and access controls using Databricks Unified Data Analytics Platform.
- Troubleshoot and resolve issues related to data platform components using Databricks Unified Data Analytics Platform.

Professional & Technical Skills:
- Must To Have Skills: Experience with Databricks Unified Data Analytics Platform.
- Good To Have Skills: Experience with other big data technologies such as Hadoop, Spark, and Kafka.
- Strong understanding of data modeling and database design principles.
- Experience with data security and access controls.
- Experience with data pipeline development and maintenance.
- Experience with troubleshooting and resolving issues related to data platform components.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai and Pune offices.
- Mandatory office (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Position Overview
Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP) - AS
Location: Pune, India

Role Description
The engineer is responsible for developing and delivering elements of engineering solutions to accomplish business goals. Awareness of the bank's important engineering principles is expected. Root cause analysis skills are developed through addressing enhancements and fixes to products, building reliability and resiliency into solutions through early testing, peer reviews, and automating the delivery life cycle. The successful candidate should be able to work independently on medium to large sized projects with strict deadlines, operate in a cross-application, mixed technical environment, and demonstrate a solid hands-on development track record while working in an agile methodology. The role demands working alongside a geographically dispersed team. The position is part of the buildout of the Compliance tech internal development team in India; the team will primarily deliver improvements in compliance tech capabilities that are major components of the regular regulatory portfolio, addressing various regulatory commitments and mandates from monitors.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those aged 35 and above

Your Key Responsibilities
- Analyze data sets, and design and code stable and scalable data ingestion workflows, integrating them into existing workflows.
- Work with team members and stakeholders to clarify requirements and provide the appropriate ETL solution.
- Work as a senior developer on developing analytics algorithms on top of ingested data.
- Work as a senior developer on various data sourcing in Hadoop and GCP.
- Ensure new code is tested both at unit level and system level; design, develop, and peer review new code and functionality.
- Operate as a member of an agile scrum team.
- Apply root cause analysis skills to identify bugs and issues behind failures.
- Support production support and release management teams in their tasks.

Your Skills And Experience
- More than 6 years of coding experience in reputed organizations.
- Hands-on experience with Bitbucket and CI/CD pipelines.
- Proficient in Hadoop, Python, Spark, SQL, Unix, and Hive.
- Basic understanding of on-prem and GCP data security.
- Hands-on development experience on large ETL/big data systems; GCP is a big plus.
- Hands-on experience with Cloud Build, Artifact Registry, Cloud DNS, Cloud Load Balancing, etc.
- Hands-on experience with Dataflow, Cloud Composer, Cloud Storage, Dataproc, etc.
- Basic understanding of data quality dimensions like consistency, completeness, accuracy, lineage, etc.
- Hands-on business and systems knowledge gained in a regulatory delivery environment.
- Banking experience with regulatory and cross-product knowledge.
- Passionate about test-driven development.
- Prior experience with release management tasks and responsibilities.
- Data visualization experience in Tableau is good to have.

How We'll Support You
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: Engineering graduate, preferably Computer Science graduate; 15 years of full-time education

Summary: As a Software Development Engineer, you will be responsible for analyzing, designing, coding, and testing multiple components of application code using PySpark. Your typical day will involve performing maintenance, enhancements, and/or development work for one or more clients in Chennai.

Roles & Responsibilities:
- Design, develop, and maintain PySpark applications for one or more clients.
- Analyze and troubleshoot complex issues in PySpark applications and provide solutions.
- Collaborate with cross-functional teams to ensure timely delivery of high-quality software solutions.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Stay updated with the latest advancements in PySpark and related technologies.

Professional & Technical Skills:
- Must To Have Skills: Strong experience in PySpark.
- Good To Have Skills: Experience in Big Data technologies such as Hadoop, Hive, and HBase.
- Experience in designing and developing distributed systems using PySpark.
- Strong understanding of data structures, algorithms, and software design principles.
- Experience in working with SQL and NoSQL databases.
- Experience in working with version control systems such as Git.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality software solutions.
- This position is based at our Bangalore, Hyderabad, Chennai and Pune offices.
- Mandatory office (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About the role
You will be responsible for
Following our Business Code of Conduct, always acting with integrity and due diligence, and having these specific risk responsibilities:
- Identifying operational improvements and finding solutions by applying CI tools and techniques
- Completing tasks and transactions within agreed KPIs
- Knowing and applying fundamental work theories/concepts/processes in own areas of work
- Engaging with business & functional partners to understand business priorities, asking relevant questions and scoping them into an analytical solution document that calls out how the application of data science will improve decision making
- In-depth understanding of techniques to prepare the analytical data set, leveraging multiple complex data sources
- Building statistical models and ML algorithms with practitioner-level competency
- Writing structured, modularised & codified algorithms using Continuous Improvement principles (development of knowledge assets and reusable modules on GitHub, Wiki, etc.) with expert competency
- Building an easy visualisation layer on top of the algorithms to empower end-users to take decisions - this could be on a visualisation platform (Tableau / Python) or through a recommendation set delivered via PPTs
- Working with the line manager to ensure application / consumption, and thinking beyond the immediate ask to spot opportunities to address bigger business questions (if any)
You will need
- 1-2 years' experience in data science application in Retail or CPG preferred
- Functional experience: Marketing, Supply Chain, Customer, Merchandising, Operations, Finance or Digital
- Applied Math: Applied Statistics, Design of Experiments, Linear & Logistic Regression, Decision Trees, Forecasting, Optimization algorithms
- Tech: SQL, Hadoop, Python, Tableau, MS Excel, MS PowerPoint
- Soft Skills: Analytical Thinking & Problem Solving, Storyboarding
What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
- Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
- Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually.
- Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
- Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
- Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
- Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
- Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
- Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
- Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.
About Us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.
Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services centre in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About the Role: We are looking for a skilled Data Scientist to analyze large amounts of raw information and find patterns that will help improve our business. You will model complex problems and use your findings to develop data-driven solutions.
Key Responsibilities:
- Collect, clean, and process large datasets from various sources
- Develop predictive models and machine learning algorithms
- Interpret data and analyze results using statistical techniques
- Visualize data insights for technical and non-technical audiences
- Collaborate with cross-functional teams to understand business needs and develop data solutions
- Monitor and optimize model performance
- Stay updated with the latest industry trends and techniques in data science
Qualifications:
- Bachelor's, Master's, or Ph.D. in Data Science, Statistics, Computer Science, or a related field
- Proven experience in data analysis, modeling, and machine learning
- Proficiency in programming languages such as Python, R, or SQL
- Experience with data visualization tools (e.g., Tableau, Power BI)
- Strong analytical and problem-solving skills
- Excellent communication skills
Preferred Skills:
- Knowledge of big data technologies (e.g., Hadoop, Spark)
- Experience with cloud platforms such as AWS
- Familiarity with deep learning frameworks (TensorFlow, PyTorch)

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you. Required This position allows the successful incumbent to develop deep technical and broad commercial skills by being exposed to, and working on a wide variety of internal projects. Working as part of a large, sophisticated analytics team, this role will be required to: Manipulate and analyse data for the development and validation of predictive models in the areas of credit risk, demographics, marketing, fraud and ratings, in distributed computing environment Monitor Equifax’s cutting edge scorecards and risk tools Create business case and analytic insight materials that show the value of predictive modelling and visualise the insights Conduct ad hoc queries such as bureau insights, testing solutions Lead small projects and quality check own and others’ outputs guide junior data scientists Maintain documentation that supports Analytics platform of databases, data products and analytical solutions. What You’ll Do Project Leadership / Mentoring Mentor Analysts through development process Driving a focus on learning so that the team benefits from your coaching and other learning events that contribute to improved performance. Develop and maintain a network of influence. Demonstrate leadership by example. Data Analysis Leading statistical and data analysis, sometimes as project team leader. Demonstrate analytic leadership in the approaches used and proactively provide technical guidance and support to analysts. Demonstrate a high level of skill in programming required to perform analysis such as demonstrated experience with SQL, R, Python and Tableau on Cloud environments such as GCP or equivalent. Process and analyse large volumes of data in the development of insights. Prepare documentation for all work, to ensure that an audit trail is sufficient for a colleague to be able to quality review and/or repeat your analysis. Quality check your own and other analyst work output to ensure error-free delivery of information and analysis. Develop extensive repertoire of analytical methodologies and techniques for investigating data relationships and insights Adhere to Equifax project management standards and effective use of project management resources (methodology, templates, time recording systems and project office). Product and Service Contributing to analytic roadmap, product development and innovation Develop a detailed understanding of the full product and service offering available through Equifax as well as the market dynamics and requirements within the data driven marketing space. Proactively use this understanding to work with the team and stakeholders to enhance and expand Equifax’s data and insights assets. 
What Experience You Need
- BS degree in a STEM major or equivalent discipline
- 5-7 years of experience in a related analytical role
- Proven track record of designing and developing predictive models in real-world applications
- Experience with model performance evaluation and predictive model optimization for accuracy and efficiency
- Cloud certification strongly preferred
- Additional role-based certifications may be required depending upon region/BU requirements
Process Improvement and Efficiencies
- Drive transition to more advanced modelling environments, utilising distributed computing and methodologies such as machine learning
- Demonstrate an understanding of business needs, making recommendations relating to new or improved data and insights assets
- Support the other teams within the Analytics function by working with them to improve processes as needed
Systems and Processes
- Develop a detailed understanding of Equifax databases, data structures and core data analysis procedures, as well as maximising output through a Hadoop-based file system
- Develop an understanding of best-practice model management frameworks to ensure Equifax's models remain optimal in terms of performance and stability
- Develop familiarity with Equifax's documented project management methodology and resources (templates, time recording system, work scheduling, etc.)
What Could Set You Apart
- Passion for data science, data mining and machine learning, and experience with big data architectures and methods
- A Master's degree in a quantitative field (Statistics, Mathematics, Economics)
- Cloud certification such as GCP strongly preferred
- Self-starter
- Excellent communicator / client facing
- Ability to work in a fast-paced environment
- Flexibility to work across A/NZ time zones based on project needs
We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!
Who is Equifax? At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life's pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best.
Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Dear Candidate, Greetings from HCL Tech!
We are hiring for a Platform Engineer – Hadoop Production Support requirement.
Experience: 10+ years
Location: Chennai
Available Positions: 1
Notice Period: Immediate joiner / 30 days
Primary skill: Advanced Python for data analysis and manipulation
Secondary: Power BI + Data Science
JD / Skill Requirements:
1. Proficiency in data science methodologies and tools.
2. Strong knowledge and experience in utilizing Power BI for data visualization.
3. Advanced programming skills in Python for data analysis and manipulation.
4. Ability to work with large data sets and databases.
If you're interested, please share the following details so we can proceed further: Name, Contact Number, Email ID, Current Location, Preferred Location, Total Experience, Relevant Experience, Current Organisation, Current CTC, Expected CTC, Notice Period.
Regards,
DurgaK

Posted 1 week ago

Apply

5.0 - 10.0 years

8 - 18 Lacs

Bengaluru, Mumbai (All Areas)

Hybrid

Naukri logo

Key Responsibilities • Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis. • Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses). • Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions. • Monitor, troubleshoot, and enhance data workflows for performance and cost optimization. • Ensure data quality and consistency by implementing validation and governance practices. • Work on data security best practices in compliance with organizational policies and regulations. • Automate repetitive data engineering tasks using Python scripts and frameworks. • Leverage CI/CD pipelines for deployment of data workflows on AWS. Required Skills and Qualifications • Professional Experience: 5+ years of experience in data engineering or a related field. • Programming: Strong proficiency in Python, with experience in libraries like pandas, pySpark, or boto3. • AWS Expertise: Hands-on experience with core AWS services for data engineering, such as AWS Glue for ETL/ELT, S3 for storage. • Redshift or Athena for data warehousing and querying. • Lambda for serverless compute. • Kinesis or SNS/SQS for data streaming. • IAM Roles for security. • Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases. • Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus. • DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline. • Version Control: Proficient with Git-based workflows. • Problem Solving: Excellent analytical and debugging skills. Optional Skills • Knowledge of data modeling and data warehouse design principles. • Experience with data visualization tools (e.g., Tableau, Power BI). • Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
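To give a sense of the ETL work described above, here is a minimal, illustrative sketch of a lightweight S3-to-S3 transform step of the kind that might run in AWS Lambda or a small Glue Python shell job. The bucket names, object keys, and column names are hypothetical and not taken from the listing; the sketch assumes pandas and pyarrow are available in the runtime.

# Illustrative only: read a raw CSV drop, validate it, and write curated Parquet back to S3.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

def handler(event, context):
    # Read a raw CSV drop from the landing bucket (hypothetical locations)
    obj = s3.get_object(Bucket="example-landing-bucket", Key="raw/events.csv")
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    # Basic validation/transformation: drop incomplete rows, normalise the timestamp column
    df = df.dropna(subset=["event_id", "event_ts"])
    df["event_ts"] = pd.to_datetime(df["event_ts"], errors="coerce")
    df = df[df["event_ts"].notna()]

    # Write the cleaned data back as Parquet for the warehouse/Athena layer
    buf = io.BytesIO()
    df.to_parquet(buf, index=False)
    s3.put_object(Bucket="example-curated-bucket",
                  Key="curated/events.parquet",
                  Body=buf.getvalue())
    return {"rows_written": len(df)}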

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models
Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 9 to 11 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium-sized components independently
Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
- Data Warehousing & Database Management: Expertise around data warehousing concepts, and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
- Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance
Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with Event/File/Table formats such as Avro, Parquet, Protobuf, Iceberg, Delta
- Others: Experience of using a job scheduler, e.g., Autosys. Exposure to Business Intelligence tools, e.g., Tableau, Power BI
Certification on any one or more of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster. Show more Show less

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

About the Team: ISG Data Technology strives to provide measurable competitive advantage to our business by delivering high quality, innovative and cost effective reference data technology and operational; solutions in order to meet the needs of the business, our clients, our regulators and stakeholders. Olympus is a next generation Data Fabric to streamline data sources across ICG and enable industry-leading analytics, client, regulatory, surveillance, supervisory, risk & finance reporting and data science solutions that are Accurate, Reliable, Relevant, Consistent, Complete, Scalable, Timely, Secure, Nimble. Olympus is built on Big data platform and technologies under Cloudera distribution like HDFS, Hive, Impala, Spark, YARN, Sentry, Oozie, Kafka. Our team interfaces with a vast client base and works in close partnership with Operations, Development and other technology counterparts running the application production platform, providing quick resolutions and timely communications to their issues, and driving improvements to stability and efficiency practices to help us and the business succeed. Job Description: The Apps Support Sr Analyst is a seasoned professional role. Applies in-depth disciplinary knowledge, contributing to the development of new techniques and the improvement of processes and work-flow for the area or function. Integrates subject matter and industry expertise within a defined area. Requires in-depth understanding of how areas collectively integrate within the sub-function as well as coordinate and contribute to the objectives of the function and overall business. Evaluates moderately complex and variable issues with substantial potential impact, where development of an approach/taking of an action involves weighing various alternatives and balancing potentially conflicting situations using multiple sources of information. Requires good analytical skills in order to filter, prioritize and validate potentially complex and dynamic material from multiple sources. Strong communication and diplomacy skills are required. Regularly assumes informal/formal leadership role within teams. Involved in coaching and training of new recruits. Significant impact in terms of project size, geography, etc. by influencing decisions through advice, counsel and/or facilitating services to others in area of specialization. Work and performance of all teams in the area are directly affected by the performance of the individual. Responsibilities: The Application Support Senior Analyst provides technical and business support for users of Citi Applications. This includes providing quick resolutions to app issues, driving stability, efficiency and effectiveness improvements to help us and the business succeed. Maintains application systems that have completed the development stage and are running in the daily operations of the firm. Manages, maintains and supports applications and their operating environments, focusing on stability, quality and functionality against service level expectations. Start of day checks, continuous monitoring, and regional handover. Perform same day risk reconciliations Develop and maintain technical support documentation. Identifies ways to maximize the potential of the applications used Assess risk and impact of production issues and escalate to business and technology management in a timely manner. 
Ensures that storage and archiving procedures are in place and functioning correctly Formulates and defines scope and objectives for complex application enhancements and problem resolution Reviews and develops application contingency planning to ensure availability to users. Partners with appropriate development and production support areas to prioritize bug fixes and support tooling requirements. Participate in application releases, from development, testing and deployment into production. Engages in post implementation analysis to ensure successful system design and functionality. Considers implications of the application of technology to the current environment. Identifies risks, vulnerabilities and security issues; communicates impact. Ensures essential procedures are followed and helps to define operating standards and processes. Act as a liaison between users/traders, interfacing internal technology groups and vendors. Expected to be able to raise problems to appropriate technology and business teams, while adhering to Service Level Agreements. Acts as advisor or coach to new or lower level analysts. Provides evaluative judgment based on analysis of factual information in complicated and unique situations. Directly impacts the business by ensuring the quality of work provided by self and others; impacts own team and closely related work teams. Exhibits sound and comprehensive communication and diplomacy skills to exchange complex information. Active involvement in and ownership of Support Project items, covering Stability, Efficiency, and Effectiveness initiatives. Performs other duties and functions as assigned. Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and /or other team members. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 5-8 years experience in an Application Support role. Experience installing, configuring or supporting business applications. Experience with some programming languages and willingness/ability to learn. Advanced execution capabilities and ability to adjust quickly to changes and re-prioritization Effective written and verbal communications including ability to explain technical issues in simple terms that non-IT staff can understand. Demonstrated analytical skills Issue tracking and reporting using tools Knowledge/ experience of problem Management Tools. Good all-round technical skills Effectively share information with other support team members and with other technology teams Ability to plan and organize workload Consistently demonstrates clear and concise written and verbal communication skills Ability to communicate appropriately to relevant stakeholders. Education: Bachelor’s/University degree or equivalent experience Additional Skill Set: Hadoop/Big Data Platform Working knowledge of various components and technologies under Cloudera distribution like HDFS, Hive, Impala, Spark, YARN, Sentry, Oozie, Kafka. Very good knowledge on analyzing the bottlenecks on the cluster - performance tuning, effective resource usage, capacity planning, investigating. 
Perform daily performance monitoring of the cluster - implement best practices, ensure cluster stability and create/analyze performance metrics. Hands-on experience in supporting applications built on Hadoop.
Linux: 4-6 years of experience.
Database: Good SQL experience in any of the RDBMS.
Scheduler: Autosys / CONTROL-M or other schedulers will be of added advantage.
Programming Languages: UNIX shell scripting; Python / Perl will be of added advantage.
Microsoft: Strong knowledge of Microsoft-based operating systems; experience using/troubleshooting Office, with emphasis on Word and Excel.
Additional Skills (preferable) - Other Applications: Knowledge / working experience of ITRS Active Console or other monitoring tools; knowledge / working experience of Autosys or another scheduler.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Support ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Summary Strategy As part of FM – Funds and Securities Services Technology, the developer is recruited to Develop and Deliver solution to support various initiatives that enables Operations to fulfil Client requirements. Business Technical Requirement Experience in Designing and Implementing Enterprise applications in Java/.Net Should have experience in Oracle /SQL server. Good knowledge on developing stored procedures using Oracle PL\SQL. Knowledge on Big Data concepts and tech stack such as Hadoop, Hive / Spark / Sqoop. Should be able to work on data extraction and data lake initiatives. DevOps (ADO, JIRA, Jenkins, Ansible, Github) exposure / experience. Knowledge on AWS/Azure Cloud native and VM concepts. Proficient in Container Proficiency in Oracle Sql, SqlServer and DBA. Working experience in solution design, capacity plan and sizing. Functional Requirements Experience in Fund Accounting, Transfer Agency and (or) Hedge Fund Administration. Knowledge of market instruments and conventions. Specialism within Fund Accounting/ client reporting/ investment operations. Hands –on in MultiFonds Fund Administration and Global Investor products or equivalent Fund Accounting Products Key Responsibilities Risk Management Proactively identify and track Obsolescence of Hardware and Software components, including OS or CVE patches, for Funds Apps and interfaces Governance Develop and Deliver as per SDF mandated process. Follow Release management standards and tools to deploy the deliverable to production. Handover to Production Support as per process Regulatory & Business Conduct Display exemplary conduct and live by the Group’s Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Lead the Agile Squad to achieve the outcomes set out in the Bank’s Conduct Principles: [Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.] * Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters. Key stakeholders FM – Funds and Securities Services Operations, Technology and Production Support. Other Responsibilities Coordinates between Product Vendor and Business Stakeholders for requirement finalization. Coordinates between Product Vendor and Business Stakeholders for requirement finalization. Understand Functional Requirement. Provide solutions by developing the required components or reports. Unit test and support SIT and UAT. Follow SCB change management process to promote the developed components to production. Proactively input to solution design including architectural view of our technology landscape, experience on data optimisation solutions (DB table mapping, data logic, development code design, etc.) Participates in identification of non-functional requirements like security requirements, performance objectives. Coordinates between various internal support teams. Picks up new technologies with ease, solves complex technical problems and multitasks between different projects Follow SCB change management process to promote the developed components to production. Proactively input to solution design including architectural view of our technology landscape. 
Participates in identification of non-functional requirements like security requirements, performance objectives. Coordinates between various internal support teams. Picks up new technologies with ease, solves complex technical problems and multitasks between different projects Skills And Experience Windows/Linux WebLogic, citrix, MQ, Solace AWS/Azure Cloud native and VM concepts PL\SQL Domain Experience in Fund Accounting, Transfer Agency and (or) Hedge Fund Administration DevOps Qualifications Degree in Computer Science, MCA or Equivalent. 7 to 10 years of prior work experience in stated Technology About Standard Chartered We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion. Together We Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term What We Offer In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations. Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holiday, which is combined to 30 days minimum. Flexible working options based around home and office locations, with flexible working patterns. Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning. Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential. Show more Show less

Posted 1 week ago

Apply

13.0 years

0 Lacs

Andhra Pradesh, India

On-site

Linkedin logo

Summary about Organization A career in our Advisory Acceleration Center is the natural extension of PwC’s leading global delivery capabilities. The team consists of highly skilled resources that can assist in the areas of helping clients transform their business by adopting technology using bespoke strategy, operating model, processes and planning. You’ll be at the forefront of helping organizations around the globe adopt innovative technology solutions that optimize business processes or enable scalable technology. Our team helps organizations transform their IT infrastructure, modernize applications and data management to help shape the future of business. An essential and strategic part of Advisory's multi-sourced, multi-geography Global Delivery Model, the Acceleration Centers are a dynamic, rapidly growing component of our business. The teams out of these Centers have achieved remarkable results in process quality and delivery capability, resulting in a loyal customer base and a reputation for excellence. . Job Description Senior Data Architect with experience in design, build, and optimization of complex data landscapes and legacy modernization projects. The ideal candidate will have deep expertise in database management, data modeling, cloud data solutions, and ETL (Extract, Transform, Load) processes. This role requires a strong leader capable of guiding data teams and driving the design and implementation of scalable data architectures. Key areas of expertise include Design and implement scalable and efficient data architectures to support business needs. Develop data models (conceptual, logical, and physical) that align with organizational goals. Lead the database design and optimization efforts for structured and unstructured data. Establish ETL pipelines and data integration strategies for seamless data flow. Define data governance policies, including data quality, security, privacy, and compliance. Work closely with engineering, analytics, and business teams to understand requirements and deliver data solutions. Oversee cloud-based data solutions (AWS, Azure, GCP) and modern data warehouses (Snowflake, BigQuery, Redshift). Ensure high availability, disaster recovery, and backup strategies for critical databases. Evaluate and implement emerging data technologies, tools, and frameworks to improve efficiency. Conduct data audits, performance tuning, and troubleshooting to maintain optimal performance Qualifications Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field. 13+ years of experience in data modeling, including conceptual, logical, and physical data design. 5 – 8 years of experience in cloud data lake platforms such as AWS Lake Formation, Delta Lake, Snowflake or Google Big Query. Proven experience with NoSQL databases and data modeling techniques for non-relational data. Experience with data warehousing concepts, ETL/ELT processes, and big data frameworks (e.g., Hadoop, Spark). Hands-on experience delivering complex, multi-module projects in diverse technology ecosystems. Strong understanding of data governance, data security, and compliance best practices. Proficiency with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner). Excellent leadership and communication skills, with a proven ability to manage teams and collaborate with stakeholders. Preferred Skills Experience with modern data architectures, such as data fabric or data mesh. Knowledge of graph databases and modeling for technologies like Neo4j. 
Proficiency with programming languages like Python, Scala, or Java. Understanding of CI/CD pipelines and DevOps practices in data engineering. Show more Show less

Posted 1 week ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industry and have a high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium)
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
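To illustrate one of the questions above - partitioning in Hive - here is a minimal, illustrative sketch using PySpark's Hive support. The database, table, and column names are made up for the example and assume a Spark build with Hive support.

# Illustrative only: create a partitioned Hive table and rely on partition pruning at query time.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-partitioning-demo")
    .enableHiveSupport()   # requires Spark built with Hive support and a configured metastore
    .getOrCreate()
)

spark.sql("CREATE DATABASE IF NOT EXISTS demo_db")

# A partitioned table stores each partition (here: one per country) in its own
# directory, so queries that filter on the partition column read far less data.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo_db.sales (
        order_id STRING,
        amount   DOUBLE
    )
    PARTITIONED BY (country STRING)
    STORED AS PARQUET
""")

# Only the country='IN' partition is scanned for this query (partition pruning).
spark.sql("SELECT SUM(amount) FROM demo_db.sales WHERE country = 'IN'").show()

Interviewers often follow up by asking about static versus dynamic partition inserts and how partition pruning affects query cost, so it is worth being able to explain both.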

Closing Remark

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies