
22 Map Reduce Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Description

Job title: Data Scientist - Deputy Manager

Your role:
- Implement solutions to problems using data analysis, data mining, optimization tools, machine learning techniques, and statistics
- Build data-science and technology-based algorithmic solutions to address business needs
- Design large-scale models using regression, the linear models family, and time-series models
- Drive the collection of new data and the refinement of existing data sources
- Analyze and interpret the results of analytics experiments
- Apply a global approach to analytical solutions, both within a business area and across the enterprise
- Use data for exploratory, descriptive, inferential, prescriptive, and advanced analytics
- Share dashboards, reports, and analytical insights from data
- Experience visualizing large datasets is preferred and an added advantage

Technical knowledge and skills required:
- Experience solving analytical problems using quantitative approaches
- Passion for empirical research and for answering hard questions with data
- Ability to manipulate and analyze complex, high-volume, high-dimensionality data from varying sources
- Ability to apply a flexible analytic approach that allows for results at varying levels of precision
- Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner
- Expert knowledge of an analysis tool such as PySpark or Python
- Experience working with large data sets; experience with distributed computing tools (Map/Reduce, Hadoop, Hive, etc.) is a plus
- Familiarity with relational databases and SQL

You're the right fit if:
- You have 5-8 years of experience with an engineering or equivalent background
- You have experience solving analytical problems using quantitative approaches
- You can manipulate and analyze complex, high-volume, high-dimensionality data from varying sources, apply a flexible analytic approach, and communicate complex quantitative analysis in a clear, precise, and actionable manner
- You have expert knowledge of an analysis tool such as R or Python, experience with large data sets and distributed computing tools (Map/Reduce, Hadoop, Hive, etc.), and familiarity with relational databases and SQL

How we work together: We believe that we are better together than apart. For our office-based teams, this means working in person at least 3 days per week. Onsite roles require full-time presence in the company's facilities. Field roles are most effectively done outside of the company's main facilities, generally at the customers' or suppliers' locations.

About Philips: We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others. If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our culture of impact with care.
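The posting centers on large-scale regression models built in PySpark. As a hedged illustration only (not Philips' code), here is a minimal Spark MLlib linear-regression sketch of that kind of work; the input path and the column names ("price", "promo_flag", "units_sold") are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("demand-regression").getOrCreate()

# Assume a prepared table of historical demand; the path is illustrative.
df = spark.read.parquet("/data/demand_history.parquet")

# Assemble predictor columns into the single vector column MLlib expects.
assembler = VectorAssembler(inputCols=["price", "promo_flag"],
                            outputCol="features")
train = assembler.transform(df).select("features", "units_sold")

# Fit an ordinary least-squares model, distributed across the cluster.
model = LinearRegression(labelCol="units_sold").fit(train)
print(model.coefficients, model.intercept)
```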

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

nagpur, maharashtra

On-site

The position is for a full-time job with rotational shifts based in Nagpur, Pune, or Bangalore. We are looking to fill 4 positions with candidates who have 5 to 8 years of experience.

As an AWS Data Engineer, you will be responsible for leading development activities for the data engineering team. You will collaborate with other teams such as application management and product delivery, working closely with technical leads, product managers, and support teams. Your role will involve providing guidance to the development, support, and product delivery teams. Additionally, you will lead the implementation of tools and technologies to drive cost-efficient architecture and infrastructure.

As an Azure Data Engineer, your responsibilities will include creating and maintaining optimal data pipelines, assembling large, complex data sets that meet business requirements, and identifying opportunities for process improvements and automation. You will develop data tools for analytics and data science teams to optimize product performance and build analytics tools that deliver actionable insights into business metrics. Collaboration with stakeholders from various teams will also be essential to address data-related technical issues and support data infrastructure needs.

The ideal candidate for the AWS Data Engineer position should have experience with AWS services like S3, Glue, SNS, SQS, Lambda, Redshift, and RDS. Proficiency in programming, especially in Python, is required, along with strong skills in designing complex SQL queries and optimizing data retrieval. Knowledge of Spark, PySpark, Hadoop, Hive, and Spark SQL is also essential. For the Azure Data Engineer role, candidates should have experience with Azure cloud services and developing Big Data applications using Spark, Hive, Sqoop, Kafka, and Map Reduce. Familiarity with stream-processing systems such as Spark Streaming and Storm will be advantageous.
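Several of the AWS requirements above (S3, PySpark, Spark SQL, partitioned output for Redshift-style consumption) fit into one small sketch. This is an illustration under stated assumptions, not the employer's pipeline; the bucket name and paths are hypothetical, and the Hadoop S3 connector is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-etl").getOrCreate()

# Read raw JSON events from S3 (path is illustrative).
raw = spark.read.json("s3a://example-bucket/raw/events/")
raw.createOrReplaceTempView("events")

# Aggregate with Spark SQL.
daily = spark.sql("""
    SELECT event_date, user_id, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date, user_id
""")

# Partitioning by date keeps downstream Redshift/Athena-style scans cheap.
daily.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3a://example-bucket/curated/daily_events/")
```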

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

As a Data Engineer Trainer/Big Data Trainer, you will be responsible for imparting knowledge and training on various technical aspects of data engineering and big data. Your key responsibilities will require expertise in data mining and ETL operations/tools. It is crucial to have a deep understanding of HDFS, the Hadoop system, Map Reduce, RDDs, Spark DataFrames, and PySpark, along with related concepts. You should also have experience using business intelligence tools such as Tableau and Power BI, and big data frameworks like Hadoop and Spark. Proficiency in Pig, Hive, Sqoop, and Kafka is essential for this role. Knowledge of AWS and/or Azure, especially their big data stacks, will be an added advantage. You should possess a high level of proficiency in standard database skills like SQL and NoSQL databases, as well as data preparation, cleaning, and wrangling/munging. A strong foundation and advanced-level understanding of statistics, R programming, Python, and machine learning is necessary to excel in this role.
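As an illustration of the Map Reduce and RDD concepts a trainer would cover, here is the canonical word-count example in PySpark's RDD API, with the map and reduce phases called out. The input and output paths are hypothetical.

```python
from pyspark import SparkContext

sc = SparkContext(appName="wordcount")

counts = (sc.textFile("hdfs:///data/corpus.txt")
            .flatMap(lambda line: line.split())   # map: emit one record per word
            .map(lambda word: (word, 1))          # map: key-value pairs
            .reduceByKey(lambda a, b: a + b))     # reduce: sum counts per key

counts.saveAsTextFile("hdfs:///data/wordcounts")
```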

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

noida, uttar pradesh

On-site

As an exceptionally skilled individual, you will be part of a dedicated team at TNS, collaborating daily to contribute to the success of the organization. If you are driven by excellence in both professional and personal aspects, this is the place for you! The role calls for a Java and/or Scala developer with expertise in Big Data tools and frameworks. You should have 8 to 12 years of proven experience in Java and/or Scala development. Your responsibilities will include hands-on work with prominent Big Data tools like Hadoop, Spark, Map Reduce, Hive, and Impala. Additionally, you should possess a deep understanding of streaming technologies such as Kafka and/or Spark Streaming. Strong familiarity with the design, development, and use of NoSQL databases like HBase, Druid, and Solr is crucial, as is experience working with public cloud platforms like AWS and Azure. To be considered for this position, you should hold a BS/B.E./B.Tech degree in Computer Science or a related field. Desirable qualifications include proficiency in object-oriented analysis and design patterns using Java/J2EE technologies, and expertise in RESTful web services and data modeling. Familiarity with build and development tools like Maven, Gradle, and Jenkins, as well as experience with test frameworks such as JUnit and Mockito, are advantageous. Knowledge of the Spring Framework, MVC architectures, and ORM frameworks like Hibernate would be a bonus. If you have a genuine passion for technology, a thirst for personal development, and a desire for growth opportunities, we invite you to discover the exciting world of TNS!
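The posting targets Java/Scala, but purely as an illustration of the Kafka plus Spark Streaming pattern it names (and to keep this page's examples in one language), here is a minimal PySpark Structured Streaming sketch. The broker address and topic are placeholders, and the Spark-Kafka connector package is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Subscribe to a hypothetical topic; the broker address is a placeholder.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka delivers bytes; cast the payload to a string before processing.
parsed = stream.selectExpr("CAST(value AS STRING) AS payload")

# Write to the console sink for demonstration purposes.
query = (parsed.writeStream.format("console")
         .outputMode("append")
         .start())
query.awaitTermination()
```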

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

The role involves working as part of the Infosys delivery team with a focus on quality assurance, issue resolution, and ensuring high customer satisfaction. Your responsibilities will include interfacing with clients, understanding requirements, creating and reviewing designs, validating architecture, and providing a high level of service offerings in the technology domain. Additionally, you will be involved in project estimation, solution delivery, technical risk planning, code reviews, unit test plan reviews, team leadership, knowledge management, and adherence to organizational guidelines and processes. As a key contributor, you will play a significant role in developing efficient programs and systems to support clients in their digital transformation journey. If you have skills and expertise in AWS EMR, Big Data, data processing, and Map Reduce, and are passionate about delivering optimized, high-quality code deliverables, this opportunity is tailored for you.

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

pune, maharashtra, india

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Carry out software design, Scala and Spark development, and automated testing of new and existing components in an Agile, DevOps, dynamic environment
- Promote development standards, code reviews, mentoring, and knowledge sharing
- Provide production support and troubleshooting
- Implement the tools and processes, handling performance, scale, availability, accuracy and monitoring
- Liaise with BAs to ensure that requirements are correctly interpreted and implemented
- Participate in regular planning and status meetings, and provide input to the development process through involvement in Sprint reviews and retrospectives
- Contribute to system architecture and design, and perform peer code reviews

Requirements

To be successful in this role, you should meet the following requirements:
- Scala development and design using Scala 2.10+, or Java development and design using Java 1.8+
- Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, ETL frameworks, Map Reduce, SQL, RESTful services
- Sound knowledge of working on a Unix/Linux platform
- Hands-on experience building data pipelines using Hadoop components: Hive, Spark, Spark SQL
- Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins), and requirement management in JIRA
- Understanding of big data modelling using relational and non-relational techniques
- Experience debugging code issues and communicating the highlighted differences to the development team/architects
- Experience with time-series/analytics databases such as Elasticsearch
- Experience with scheduling tools such as Airflow and Control-M
- Understanding or experience of cloud design patterns
- Exposure to DevOps and Agile project methodologies such as Scrum and Kanban
- Experience developing Hive QL and UDFs for analysing semi-structured/structured datasets

Location: Pune and Bangalore

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by - HSDI
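The Hive QL/UDF requirement above is concrete enough to illustrate. Here is a minimal sketch, assuming hypothetical table and column names ("accounts", "account_no"), of registering a Python UDF with Spark SQL and calling it from Hive-style SQL; it is not HSBC's code.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = (SparkSession.builder.appName("hive-udf")
         .enableHiveSupport().getOrCreate())

def mask_account(s):
    # Mask all but the last four characters of an account number.
    return None if s is None else "*" * max(len(s) - 4, 0) + s[-4:]

spark.udf.register("mask_account", mask_account, StringType())

spark.sql("""
    SELECT mask_account(account_no) AS account_masked, balance
    FROM accounts
""").show()
```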

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

pune, maharashtra, india

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Carry out software design, Scala and Spark development, and automated testing of new and existing components in an Agile, DevOps, dynamic environment
- Promote development standards, code reviews, mentoring, and knowledge sharing
- Provide production support and troubleshooting
- Implement the tools and processes, handling performance, scale, availability, accuracy and monitoring
- Liaise with BAs to ensure that requirements are correctly interpreted and implemented
- Participate in regular planning and status meetings, and provide input to the development process through involvement in Sprint reviews and retrospectives
- Contribute to system architecture and design, and perform peer code reviews

Requirements

To be successful in this role, you should meet the following requirements:
- Scala development and design using Scala 2.10+, or Java development and design using Java 1.8+
- Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, ETL frameworks, Map Reduce, SQL, RESTful services
- Sound knowledge of working on a Unix/Linux platform
- Hands-on experience building data pipelines using Hadoop components: Hive, Spark, Spark SQL
- Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins), and requirement management in JIRA
- Understanding of big data modelling using relational and non-relational techniques
- Experience debugging code issues and communicating the highlighted differences to the development team/architects
- Experience with time-series/analytics databases such as Elasticsearch
- Experience with scheduling tools such as Airflow and Control-M
- Understanding or experience of cloud design patterns
- Exposure to DevOps and Agile project methodologies such as Scrum and Kanban
- Experience developing Hive QL and UDFs for analysing semi-structured/structured datasets

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by - HSDI

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

pune, maharashtra, india

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Carry out software design, Scala and Spark development, and automated testing of new and existing components in an Agile, DevOps, dynamic environment
- Promote development standards, code reviews, mentoring, and knowledge sharing
- Provide production support and troubleshooting
- Implement the tools and processes, handling performance, scale, availability, accuracy and monitoring
- Liaise with BAs to ensure that requirements are correctly interpreted and implemented
- Participate in regular planning and status meetings, and provide input to the development process through involvement in Sprint reviews and retrospectives
- Contribute to system architecture and design, and perform peer code reviews

Requirements

To be successful in this role, you should meet the following requirements:
- Scala development and design using Scala 2.10+, or Java development and design using Java 1.8+
- Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, ETL frameworks, Map Reduce, SQL, RESTful services
- Sound knowledge of working on a Unix/Linux platform
- Hands-on experience building data pipelines using Hadoop components: Hive, Spark, Spark SQL
- Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins), and requirement management in JIRA
- Understanding of big data modelling using relational and non-relational techniques
- Experience debugging code issues and communicating the highlighted differences to the development team/architects
- Experience with time-series/analytics databases such as Elasticsearch
- Experience with scheduling tools such as Airflow and Control-M
- Understanding or experience of cloud design patterns
- Exposure to DevOps and Agile project methodologies such as Scrum and Kanban
- Experience developing Hive QL and UDFs for analysing semi-structured/structured datasets

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by - HSDI

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Position Summary...

Drives the execution of multiple business plans and projects by identifying customer and operational needs; developing and communicating business plans and priorities; removing barriers and obstacles that impact performance; providing resources; identifying performance standards; measuring progress and adjusting performance accordingly; developing contingency plans; and demonstrating adaptability and supporting continuous learning. Provides supervision and development opportunities for associates by selecting and training; mentoring; assigning duties; building a team-based work environment; establishing performance expectations and conducting regular performance evaluations; providing recognition and rewards; coaching for success and improvement; and promoting a belonging mindset in the workplace. Promotes and supports company policies, procedures, mission, values, and standards of ethics and integrity by training and providing direction to others in their use and application; ensuring compliance with them; and utilizing and supporting the Open Door Policy. Ensures business needs are being met by evaluating the ongoing effectiveness of current plans, programs, and initiatives; consulting with business partners, managers, co-workers, or other key stakeholders; soliciting, evaluating, and applying suggestions for improving efficiency and cost-effectiveness; and participating in and supporting community outreach events.

What you'll do...

About the Team: The Centroid team at Walmart serves as the backbone of Walmart's end-to-end supply chain strategy. They are entrusted with designing and implementing a long-term supply chain strategy that uses advanced data analytics and data science. Their primary objective is to ensure that Walmart provides top-tier customer service while supporting increasing demand over time and simultaneously operating at low and efficient costs. The team utilizes sophisticated data analysis methods to understand patterns, identify potential bottlenecks, and predict future trends. This enables them to optimize processes, make informed business decisions, and enhance overall operational efficiency. One of Centroid's key responsibilities also includes the creation of a Digital Twin Simulation platform for Walmart's supply chain. This innovative tool allows the team to test and validate all future strategies and tactical decisions before they are launched operationally. It also enables a deep assessment of long-term strategic sensitivity. In essence, the Centroid team's work is integral to ensuring Walmart's supply chain is robust, flexible, and capable of adapting to ever-changing market demands. Their work helps to keep Walmart at the forefront of retail supply chain management, delivering exceptional service to customers while maintaining efficient operational costs.

What you'll do:
- Develop and manage advanced data analytics models to optimize supply chain strategies, balancing customer satisfaction with operational cost and asset efficiency.
- Leverage data analytics to identify opportunities for improvement and drive impactful results through collaboration with cross-functional teams.
- Establish relationships across Walmart functional areas to identify best practices, solicit data/input, coordinate interdisciplinary initiatives, and rally support for data-driven recommendations.
- Secure alignment and support from relevant business partners and management for data-centric projects, leading discussions to drive necessary change.
- Utilize all available data resources effectively to ensure successful project outcomes.
- Communicate data insights clearly and persuasively through emails, verbal discussions, and presentations, tailoring communication methods to the audience for maximum impact.
- Collaborate with multiple supply chain business teams to proactively identify, assess, and leverage cost-saving and service improvement opportunities through advanced data analytics.
- Utilize advanced analytics models to derive insights that will inform policy design across various supply chain areas, laying out multiple scenarios and performing sensitivity analysis.
- Collaborate with data scientists and engineers to productionize and scale advanced analytics models as needed.
- Develop and present compelling data-driven narratives, documents, and visuals to influence key stakeholders in their decision-making.
- Provide coaching and training support to other team members in the supply chain area, leveraging your expertise in advanced data analytics.

What you'll bring:
- Strong analytical acumen with technical expertise in advanced data analytics and modelling
- Expertise in SQL and cloud data platforms like BigQuery
- Expert programming skills in Python (or R)
- Experience using data visualization tools like Tableau and Looker to drive powerful insights
- Experience working with large data sets and distributed computing tools (Map/Reduce, Hadoop, Hive, and/or Spark)
- Experience operating in a cloud environment such as Google Cloud Platform or Microsoft Azure
- Ability to work in a fast-paced, iterative development environment
- Strong communication skills, both written and verbal, plus the ability to work with cross-functional teams of technical and non-technical members
- Strong ability to understand the business, with good stakeholder management capabilities
- Experience working in a cross-functional environment and leading or mentoring teams

About Walmart Global Tech: Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail.

Flexible, hybrid work: We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

Benefits: Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

Belonging: We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is and feels included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.

Equal Opportunity Employer: Walmart, Inc., is an Equal Opportunities Employer By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions while being welcoming of all people.

Minimum Qualifications: Option 1: Bachelor's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or a related field and 4 years' experience in an analytics-related field. Option 2: Master's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or a related field and 2 years' experience in an analytics-related field. Option 3: 6 years' experience in an analytics or related field.

Preferred Qualifications: None listed.

Primary Location: BLOCK-1, PRESTIGE TECH PACIFIC PARK, SY NO. 38/1, OUTER RING ROAD, KADUBEESANAHALLI, India. R-2272432

Posted 3 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

hyderabad, telangana

On-site

You have a great opportunity to join as a Data Software Engineer with 5-12 years of experience in Big Data and data-related technology. We are looking for candidates with an expert-level understanding of distributed computing principles and hands-on experience in Apache Spark, along with proficiency in Python. You should also have experience with technologies like Hadoop, Map Reduce, HDFS, Sqoop, Apache Storm, Spark Streaming, Kafka, Hive, and Impala, and with integration of data from various sources such as RDBMS, ERP, and files. Additionally, knowledge of NoSQL databases, ETL techniques, SQL queries, joins, stored procedures, relational schemas, and performance tuning of Spark jobs is required. Moreover, you must have experience with native cloud data services like Azure Databricks and the ability to lead a team efficiently. Familiarity with AGILE methodology and designing and implementing Big Data solutions would be an added advantage. This full-time position is based in Hyderabad and requires candidates who are available for face-to-face interactions. If you meet these requirements and are passionate about working with cutting-edge technologies in the field of Big Data, we would love to hear from you.
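Performance tuning of Spark jobs recurs across these postings. As a hedged sketch only, here are three common levers: right-sizing shuffle partitions, caching a reused DataFrame, and broadcasting a small dimension table. The paths, column names, and partition count are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder.appName("tuning-demo")
         # Shuffle partition count is workload-dependent; 200 is only a placeholder.
         .config("spark.sql.shuffle.partitions", "200")
         .getOrCreate())

facts = spark.read.parquet("/data/facts").cache()   # reused below, keep in memory
dims = spark.read.parquet("/data/dims")             # assumed small enough to broadcast

# A broadcast join avoids shuffling the large fact table across the cluster.
joined = facts.join(broadcast(dims), "dim_id")
joined.groupBy("dim_id").count().show()

facts.unpersist()
```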

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

coimbatore, tamil nadu

On-site

As a Data Software Engineer at KG Invicta Services Pvt Ltd, you will leverage your 5-12 years of experience in Big Data and data-related technologies to drive impactful solutions. Your expertise in distributed computing principles and Apache Spark, coupled with hands-on programming skills in Python, will be instrumental in designing and implementing efficient Big Data solutions. You will demonstrate proficiency in a variety of tools and technologies including Hadoop v2, Map Reduce, HDFS, Sqoop, Apache Storm, Spark Streaming, Kafka, RabbitMQ, Hive, Impala, and NoSQL databases such as HBase, Cassandra, and MongoDB. Your ability to integrate data from diverse sources like RDBMS, ERP, and files, along with knowledge of ETL techniques and frameworks, will ensure seamless data processing and analysis. Performance tuning of Spark jobs, familiarity with cloud data services like AWS and Azure Databricks, and the capability to lead a team effectively will be key aspects of your role. Your expertise in SQL queries, joins, stored procedures, and relational schemas will contribute to the optimization of data querying processes. Your experience with AGILE methodology and a deep understanding of Big Data querying tools will enable you to contribute significantly to the development and enhancement of stream-processing systems. You will collaborate with cross-functional teams to deliver high-quality solutions that meet business requirements. If you are passionate about leveraging data to drive innovation and possess a strong foundation in Spark, Python, and cloud technologies, we invite you to join our team as a Data Software Engineer. This is a full-time position with a day shift schedule, and the work location is in person.

Category: ML/AI Engineers, Data Scientist, Software Engineer, Data Engineer
Expertise: Python (5 years), AWS (3 years), Apache Spark (5 years), PySpark (3 years), GCP (3 years), Azure (3 years), Apache Kafka (3 years)

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

You should have about 3-4 years of strong software development experience in a product company, with a minimum of two years of hands-on experience in each of the following skills. As a Key Account Manager in the Telecom (Mobility) field, you should have good contacts and be able to liaise effectively with telecom clients. Your responsibilities will involve working on large-scale, distributed, and highly scalable Big Data processing systems. You must have hands-on experience with tools such as Hadoop, HBase, Map Reduce, Hive, and Big Data SQL. You should be proficient in developing software that can scale for large-volume batch and online systems, and have experience writing code to process large amounts of structured and unstructured data using MapReduce/Spark or batch processing systems. Proficiency in solving problems using an object-oriented programming language, preferably Java, is required. Additionally, you should have good experience with, and exposure to, data modeling, querying, and optimization for handling big-table data stores. A working knowledge of J2EE technologies, exposure to analytics, and proficiency in Python, Scala, or a functional language is preferred; experience in R would be a plus. You must possess excellent analytical capability and problem-solving abilities. Experience working with public and private clouds, *nix environments, scripting, and other toolsets is also necessary. Preferred qualifications include experience with Agile processes involving smaller, quicker releases, exposure to predictive analytics and machine learning, and familiarity with virtual environments like VirtualBox. Any experience with test-driven development, continuous integration, and release management will be considered a great advantage for this role.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

coimbatore, tamil nadu

On-site

As a Data Software Engineer, you will draw on your 5-12 years of experience in Big Data and data-related technologies to contribute to the success of projects in Chennai and Coimbatore in a hybrid work mode. You should possess an expert-level understanding of distributed computing principles and strong knowledge of Apache Spark, with hands-on programming skills in Python. Your role will involve working with technologies such as Hadoop v2, Map Reduce, HDFS, Sqoop, Apache Storm, and Spark Streaming to build stream-processing systems. You should have a good grasp of Big Data querying tools like Hive and Impala, as well as experience in integrating data from various sources including RDBMS, ERP, and files. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB, along with knowledge of ETL techniques and frameworks, will be essential for this role. You will be tasked with performance tuning of Spark jobs, working with Azure Databricks, and leading a team efficiently. Additionally, your expertise in designing and implementing Big Data solutions, along with a strong understanding of SQL queries, joins, stored procedures, and relational schemas, will be crucial. As a practitioner of AGILE methodology, you will play a key role in the successful delivery of data-driven projects.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

coimbatore, tamil nadu

On-site

You should have 5-12 years of experience in Big Data and data-related technologies. Your expertise should include a deep understanding of distributed computing principles and strong knowledge of Apache Spark. Proficiency in Python programming is required, along with experience using technologies such as Hadoop v2, Map Reduce, HDFS, Sqoop, Apache Storm, and Spark Streaming for building stream-processing systems. You should have a good understanding of Big Data querying tools like Hive and Impala, as well as experience in integrating data from various sources such as RDBMS, ERP, and files. Knowledge of SQL queries, joins, stored procedures, and relational schemas is essential. Experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is also expected. The role requires performance tuning of Spark jobs, experience with Azure Databricks, and the ability to lead a team efficiently. Designing and implementing Big Data solutions, as well as following AGILE methodology, are key aspects of this position.

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At o9 Solutions, our mission is clear: be the Most Valuable Platform (MVP) for enterprises. With our AI-driven platform, the o9 Digital Brain, we integrate global enterprises' siloed planning capabilities, helping them capture millions and, in some cases, billions of dollars in value leakage. But our impact doesn't stop there. Businesses that plan better and faster also reduce waste, which drives better outcomes for the planet, too. We're on the lookout for the brightest, most committed individuals to join us on our mission. Along the journey, we'll provide you with a nurturing environment where you can be part of something truly extraordinary and make a real difference for companies and the planet.

What you'll do for us:
- Apply a variety of machine learning techniques (clustering, regression, ensemble learning, neural nets, time series, optimizations, etc.) with an understanding of their real-world advantages and drawbacks
- Develop and/or optimize models for demand sensing/forecasting, optimization (heuristic, LP, GA, etc.), anomaly detection, simulation and stochastic models, market intelligence, etc.
- Use the latest advancements in AI/ML to solve business problems
- Analyze problems by synthesizing complex information, evaluating alternate methods, and articulating the result with the relevant assumptions and reasons
- Apply common business metrics (Forecast Accuracy, Bias, MAPE) and generate new ones as needed
- Develop or optimize modules to call web services for real-time integration with external systems
- Work collaboratively with clients, project management, solution architects, consultants, and data engineers to ensure successful delivery of o9 projects

What you'll have:
- Experience: 4+ years of experience in time-series forecasting at scale using heuristic-based, hierarchical best-fit models and algorithms like exponential smoothing, ARIMA, and Prophet, with custom parameter tuning
- Experience in applied analytical methods in the field of supply chain and planning, such as demand planning, supply planning, market intelligence, and optimal assortments/pricing/inventory; a statistical background is expected
- Education: Bachelor's degree in Computer Science, Mathematics, Statistics, Economics, Engineering, or a related field
- Languages: Python and/or R for data science
- Skills: deep knowledge of statistical and machine learning algorithms, building scalable ML frameworks, identifying and collecting relevant input data, feature engineering, tuning, and testing
- Characteristics: independent thinkers with strong presentation and communication skills

We really value team spirit: transparency and frequent communication are key. At o9, this is not limited by hierarchy, distance, or function.

Nice to have:
- Experience with SQL, databases, and ETL tools or similar is optional but preferred
- Exposure to distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, or related Big Data technologies
- Experience with deep learning frameworks such as Keras, TensorFlow, or PyTorch is preferable
- Experience in implementing planning applications will be a plus
- Understanding of supply chain concepts will be preferable
- Master's degree in Computer Science, Applied Mathematics, Statistics, Engineering, Business Analytics, Operations, or a related field

What we'll do for you:
- Competitive salary, with stock options for eligible candidates
- Flat organization, with a very strong entrepreneurial culture (and no corporate politics)
- Great people and unlimited fun at work
- Possibility to make a difference in a scale-up environment
- Opportunity to travel onsite in specific phases, depending on project requirements
- Support network: work with a team you can learn from every day
- Diversity: we pride ourselves on our international working environment
- Work-life balance: https://youtu.be/IHSZeUPATBA?feature=shared
- Feel part of a team: https://youtu.be/QbjtgaCyhes?feature=shared

How the process works: Apply by clicking the button below. You'll be contacted by our recruiter, who'll fill you in on all things at o9, give you some background about the role, and get to know you. They'll contact you either via video call or phone call, whichever you prefer. During the interview phase, you will meet with technical panels for 60 minutes. The recruiter will contact you after the interview to let you know if we'd like to progress your application. We will have 2 rounds of technical discussion followed by a hiring manager discussion. Our recruiter will let you know if you're the successful candidate. Good luck!

More about us: With the latest increase in our valuation from $2.7B to $3.7B despite challenging global macroeconomic conditions, o9 Solutions is one of the fastest-growing technology companies in the world today. Our mission is to digitally transform planning and decision-making for the enterprise and the planet. Our culture is high-energy and drives us to aim 10x in everything we do. Our platform, the o9 Digital Brain, is the premier AI-powered, cloud-native platform driving the digital transformations of major global enterprises including Google, Walmart, ABInBev, Starbucks and many others. Our headquarters are located in Dallas, with offices in Amsterdam, Paris, London, Barcelona, Madrid, Sao Paulo, Bengaluru, Tokyo, Seoul, Milan, Stockholm, Sydney, Shanghai, Singapore and Munich.

o9 is an equal opportunity employer and seeks applicants of diverse backgrounds and hires without regard to race, colour, gender, religion, national origin, citizenship, age, sexual orientation or any other characteristic protected by law.
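The forecasting requirement above names exponential smoothing, ARIMA, and Prophet. As an illustration only (not o9's implementation), here is a minimal exponential-smoothing forecast with statsmodels on a made-up weekly demand series.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical weekly demand history.
series = pd.Series(
    [120, 135, 128, 150, 160, 155, 170, 180],
    index=pd.date_range("2024-01-07", periods=8, freq="W"),
)

# Additive-trend Holt model; parameters are fit by maximum likelihood.
model = ExponentialSmoothing(series, trend="add").fit()
print(model.forecast(4))   # forecast four weeks ahead
```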

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

chennai, tamil nadu

On-site

You should have 5-12 years of experience in Big Data and data-related technologies, with expertise in distributed computing principles. Your skills should include an expert-level understanding of Apache Spark and hands-on programming with Python. Proficiency in Hadoop v2, Map Reduce, HDFS, and Sqoop is required. Experience in building stream-processing systems using technologies like Apache Storm or Spark Streaming, as well as working with messaging systems such as Kafka or RabbitMQ, will be beneficial. A good understanding of Big Data querying tools like Hive and Impala, along with integration of data from multiple sources including RDBMS, ERP, and files, is necessary. You should possess knowledge of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is expected. Performance tuning of Spark jobs and familiarity with native cloud data services like AWS or Azure Databricks is essential. The role requires the ability to lead a team efficiently, design and implement Big Data solutions, and work as a practitioner of AGILE methodology. This position falls under the Data Engineer category and is suitable for ML/AI engineers, data scientists, and software engineers.

Posted 1 month ago

Apply

3.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

You should have strong experience in PySpark, Python, Unix scripting, Spark SQL, and Hive. You must be proficient in writing SQL queries and creating views, and possess excellent oral and written communication skills. Prior experience in the insurance domain would be beneficial. A good understanding of the Hadoop ecosystem, including HDFS, Map Reduce, Pig, Hive, Oozie, and Yarn, is required, as is knowledge of AWS services such as Glue, S3, Lambda, Step Functions, and EC2. Experience in data migration from platforms like Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work. As a candidate, you should have 6-8 years of technical experience in PySpark and AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years of experience in Big Data/ETL using Python, Spark, and Hive, along with 3+ years of experience in AWS. Your primary key skills should include PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), and Big Data with Python, Spark, and Hive experience, plus exposure to Big Data migration. Secondary skills that would be beneficial for this role include Informatica BDM/PowerCenter, Databricks, and MongoDB.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

Job Description: We are looking for a skilled PySpark developer with 4-5 or 2-3 years of experience to join our team. As a PySpark developer, you will be responsible for developing and maintaining data processing pipelines using PySpark, Apache Spark's Python API. You will work closely with data engineers, data scientists, and other stakeholders to design and implement scalable and efficient data processing solutions. A Bachelor's or Master's degree in Computer Science, Data Science, or a related field is required. The ideal candidate should have strong expertise in the Big Data ecosystem, including Spark, Hive, Sqoop, HDFS, Map Reduce, Oozie, Yarn, HBase, and NiFi, and should be below 35 years of age. Responsibilities include designing, developing, and maintaining PySpark data processing pipelines to process large volumes of structured and unstructured data, and collaborating with data engineers and data scientists to understand data requirements and design efficient data models and transformations. Optimizing and tuning PySpark jobs for performance, scalability, and reliability is a key responsibility. Implementing data quality checks, error handling, and monitoring mechanisms to ensure data accuracy and pipeline robustness is crucial, as is developing and maintaining documentation for PySpark code, data pipelines, and data workflows. Experience in developing production-ready Spark applications using Spark RDD APIs, DataFrames, Datasets, Spark SQL, and Spark Streaming is required. Strong experience with Hive bucketing and partitioning, as well as writing complex Hive queries using analytical functions, is essential; knowledge of writing custom UDFs in Hive to support custom business requirements is a plus. If you meet the above qualifications and are interested in this position, please email your resume, mentioning the position applied for in the subject line, to: careers@cdslindia.com.
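The Hive bucketing and partitioning requirement above lends itself to a short sketch, issued here through Spark SQL. This is an illustration only; the table, database, and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("hive-bucketing")
         .enableHiveSupport().getOrCreate())

spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_bucketed (
        order_id BIGINT,
        customer_id BIGINT,
        amount DOUBLE
    )
    PARTITIONED BY (order_date STRING)           -- prunes scans by date
    CLUSTERED BY (customer_id) INTO 32 BUCKETS   -- speeds joins on customer_id
    STORED AS ORC
""")

# Dynamic-partition insert from a hypothetical staging table.
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("""
    INSERT INTO sales_bucketed PARTITION (order_date)
    SELECT order_id, customer_id, amount, order_date FROM sales_staging
""")
```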

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

The Conversational AI team at Walmart is responsible for building and deploying core AI assistant experiences across Walmart, catering to millions of active users globally. As a Staff Data Scientist, you will play a crucial role in leading the evolution of the AI assistant platform by developing highly scalable Generative AI systems and infrastructure. This hands-on leadership position requires expertise in machine learning, ASR, large-scale distributed systems, multi-modal LLMs, and more. Your responsibilities will include partnering with key business stakeholders to drive the development and planning of proofs of concept and production AI solutions within the Conversational AI space. You will be involved in translating business requirements into strategies, initiatives, and projects aligned with business objectives. Designing, testing, and deploying cutting-edge AI solutions at scale to enhance customer experiences will be a key aspect of your role. Collaboration with applied scientists, ML engineers, software engineers, and product managers will be essential in developing the next generation of AI assistant experiences. Staying updated on industry trends in Generative AI, speech and video processing, and AI assistant architecture patterns will be crucial. Additionally, providing technical leadership, guidance, and mentorship to a skilled team of data scientists, as well as driving innovation through problem-solving cycles and research publication, are integral parts of this role. To qualify for this position, you should have a Master's degree with 8+ years or a Ph.D. with 6+ years of relevant experience in Computer Science, Statistics, Mathematics, or a related field. A strong track record in a data science tech lead role, extensive experience in designing and deploying AI products, and expertise in machine learning, NLP, speech processing, image processing, and deep learning models are required. Proficiency in industry tools and technologies, a deep interest in generative AI, and exceptional decision-making skills will be assets in this role. Furthermore, you should possess a thorough understanding of distributed technologies, public cloud platforms, and big data systems, along with experience working with geographically distributed teams. Business acumen, research acumen with publications in top-tier AI conferences, and strong programming skills in Python and Java are also essential qualifications for this position. Join the Conversational AI team at Walmart Global Tech, where you will have the opportunity to make a significant impact, innovate at scale, and shape the future of retail while working in a collaborative and inclusive environment.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

maharashtra

On-site

The opportunity available at EY is for a Big Data Engineer based in Pune, requiring a minimum of 4 years of experience. As a key member of the technical team, you will collaborate with engineers, data scientists, and data users in an Agile environment. Your responsibilities will include software design, Scala and Spark development, automated testing, promoting development standards, production support, troubleshooting, and liaising with BAs to ensure correct interpretation and implementation of requirements. You will be involved in implementing tools and processes, handling performance, scale, availability, accuracy, and monitoring. Additionally, you will participate in regular planning and status meetings, provide input in Sprint reviews and retrospectives, and contribute to system architecture and design. Peer code reviews will also be a part of your responsibilities. Key technical skills required for this role include Scala or Java development and design, and experience with technologies such as Apache Hadoop, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, and ETL frameworks. Hands-on experience in building data pipelines using Hadoop components and familiarity with version control tools, automated deployment tools, and requirement management are essential, as are knowledge of big data modelling techniques and the ability to debug code issues. Desired qualifications include experience with Elasticsearch, scheduling tools like Airflow and Control-M, an understanding of cloud design patterns, exposure to DevOps and Agile project methodology, and Hive QL development. The ideal candidate will possess strong communication skills and the ability to collaborate effectively, mentor developers, and lead technical initiatives. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is required. EY is looking for individuals who can work collaboratively across teams, solve complex problems, and deliver practical solutions while adhering to commercial and legal requirements. The organization values agility, curiosity, mindfulness, positive energy, adaptability, and creativity in its employees. EY offers a personalized career journey, with ample learning opportunities and resources to help individuals understand their roles and opportunities better. EY is committed to being an inclusive employer that focuses on achieving a balance between delivering excellent client service and supporting the career growth and wellbeing of its employees. As a global leader in assurance, tax, transaction, and advisory services, EY believes in providing training, opportunities, and creative freedom to its employees to help build a better working world. The organization encourages personal and professional growth, offering motivating and fulfilling experiences to help individuals reach their full potential.

Posted 2 months ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Job Description:

Skills: AWS EMR

Key Responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you.

Technical Requirements: Primary skills: Technology->Big Data - Data Processing->Map Reduce

Preferred Skills: Technology->Big Data - Data Processing->Map Reduce

Posted 3 months ago

Apply

7 - 11 years

50 - 60 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Role: Resident Solution Architect
Location: Remote

The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes, all the while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. This role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include:
- Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Expert-level hands-on coding experience in Python, SQL, Spark/Scala, or PySpark
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- IoT/event-driven/microservices in the cloud; experience with private and public cloud architectures, their pros and cons, and migration considerations
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- Extensive hands-on experience with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, Map Reduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Experience using Azure DevOps and CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Experience in creating tables, partitioning, bucketing, and loading and aggregating data using Spark SQL/Scala
- Ability to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and defining conceptual, logical, and physical data models
- Proficient-level experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization

Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable
- Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure
- Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling
- Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery
- Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it
- Employ change management best practices to ensure that data remains readily accessible to the business
- Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs, with experience in MDM using data governance solutions

Qualifications:
- Overall experience of 12+ years in the IT field
- Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for near real-time data warehouses, and machine learning solutions
- Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions
- Experience in a software development, data engineering, or data analytics field using Python, Scala, Spark, Java, or equivalent technologies
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience
- Good to have: advanced technical certifications such as Azure Solutions Architect Expert, AWS Certified Data Analytics, DASCA Big Data Engineering and Analytics, AWS Certified Cloud Practitioner, Solutions Architect, or Professional Google Cloud Certified

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
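The Delta Lake requirement above is central to the Lakehouse work this role describes. As a hedged sketch only (assuming a Databricks or delta-spark runtime; the paths and join key are hypothetical), here is the common incremental-upsert pattern with a Delta MERGE.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Incoming change records (path is illustrative).
updates = spark.read.parquet("/landing/customers_changed")

# Upsert into an existing Delta table: update matches, insert new rows.
target = DeltaTable.forPath(spark, "/lake/customers")
(target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```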

Posted 4 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
