
627 MapReduce Jobs - Page 25


5 - 8 years

3 - 7 Lacs

Chennai

Work from Office

About The Role

Role Purpose: The purpose of this role is to prepare test cases and perform testing of the product/platform/solution to be deployed at a client end and ensure it meets 100% of the quality assurance parameters.

Do: Be instrumental in understanding the test requirements and test case design of the product. Author the test plan with appropriate knowledge of the business requirements and the corresponding testable requirements. Implement Wipro's way of testing using model-based testing to achieve efficient test generation. Ensure test cases are peer reviewed to reduce rework. Work with the development team to identify and capture test cases, and ensure versioning. Set the criteria, parameters, and scope/out-of-scope of testing, and take part in UAT (User Acceptance Testing). Automate the test life cycle process at the appropriate stages through VB macros, scheduling, GUI automation, etc. Design and execute the automation framework and reporting. Develop and automate tests for software validation by setting up test environments, designing test plans, developing test cases/scenarios/usage cases, and executing these cases. Ensure the test defects raised follow the norm defined for the project/program/account, with a clear description and replication pattern. Detect bugs, prepare and file defect reports, and report test progress. Allow no instances of rejection/slippage of delivered work items, and keep them within Wipro/customer SLAs and norms. Design and release the test status dashboard on time to stakeholders at the end of every test execution cycle. Provide feedback on usability and serviceability, trace the results to quality risk, and report them to the concerned stakeholders.

Status Reporting and Customer Focus on an ongoing basis with respect to testing and its execution: Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc. Deliver on time: WSRs, test execution reports, and relevant dashboard updates in the test management repository. Update accurate efforts in eCube, TMS, and other project-related trackers. Respond to customer requests on time, with no instances of complaints either internally or externally.

Performance parameters and measures: 1) Understanding the test requirements and test case design of the product - measured by error-free testing solutions, minimum process exceptions, 100% SLA compliance, and the number of automations done using VB macros. 2) Execution of test cases and reporting - measured by testing efficiency and quality, on-time delivery, troubleshooting of queries within TAT, and CSAT score.

Mandatory Skills: Scala programming. Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention - of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 months ago

Apply

4 - 8 years

10 - 14 Lacs

Bengaluru

Work from Office

About the role: We are seeking a highly skilled Domain Expert in Condition Monitoring to join our team and play a pivotal role in advancing predictive maintenance strategies for electrical equipment. This position focuses on leveraging cutting-edge machine learning and data analytics techniques to design and implement scalable solutions that optimize maintenance processes, enhance equipment reliability, and support operational efficiency. As part of this role, you will apply your expertise in predictive modeling, supervised and unsupervised learning, and advanced data analysis to uncover actionable insights from high-dimensional datasets. You will collaborate with cross-functional teams to translate business requirements into data-driven solutions that surpass customer expectations. If you have a passion for innovation and sustainability in the industrial domain, this is an opportunity to make a meaningful impact. Key Responsibilities: Develop and implement predictive maintenance models using a variety of supervised and unsupervised learning techniques. Analyze high-dimensional datasets to identify patterns and correlations that can inform maintenance strategies. Utilize linear methods for regression and classification, as well as advanced techniques such as splines, wavelets, and kernel methods. Conduct model assessment and selection, focusing on bias, variance, overfitting, and cross-validation. Apply ensemble learning techniques, including Random Forest and Boosting, to improve model accuracy and robustness. Implement structured methods for supervised learning, including additive models, trees, neural networks, and support vector machines. Explore unsupervised learning methods such as cluster analysis, principal component analysis, and self-organizing maps to uncover insights from data. Engage in directed and undirected graph modeling to represent and analyze complex relationships within the data. Collaborate with cross-functional teams to translate business requirements into data-driven solutions. Communicate findings and insights to stakeholders, providing actionable recommendations for maintenance optimization. Mandatory Requirements: Master's degree or Ph.D. in Data Science, Statistics, Computer Science, Engineering, or a related field. Proven experience in predictive modeling and machine learning, particularly in the context of predictive maintenance. Strong programming skills in languages such as Python, R, or similar, with experience in relevant libraries (e.g., scikit-learn, TensorFlow, Keras). Familiarity with data visualization tools and techniques to effectively communicate complex data insights. Experience with big data technologies and frameworks (e.g., Hadoop, Spark) is a plus. Excellent problem-solving skills and the ability to work independently as well as part of a team. Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders. Good to Have: Experience in industrial software & enterprise solutions. Preferred Skills & Attributes: Strong understanding of modern software architectures and DevOps principles. Ability to analyze complex problems and develop effective solutions. Excellent communication and teamwork skills, with experience in cross-functional collaboration. Self-motivated and capable of working independently on complex projects. About the Team: Become a part of our mission for sustainability - clean energy for generations to come.
We are a global team of diverse colleagues who share a passion for renewable energy and have a culture of trust and empowerment to make our own ideas a reality. We focus on personal and professional development to grow internally within our organization. Who is Siemens Energy? At Siemens Energy, we are more than just an energy technology company. We meet the growing energy demand across 90+ countries while ensuring our climate is protected. With more than 96,000 dedicated employees, we not only generate electricity for over 16% of the global community, but we're also using our technology to help protect people and the environment. Our global team is committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. We uphold a 150-year legacy of innovation that encourages our search for people who will support our focus on decarbonization, new technologies, and energy transformation.
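For illustration only, here is a minimal sketch of the kind of supervised predictive-maintenance baseline the responsibilities above describe: a Random Forest classifier assessed with k-fold cross-validation using scikit-learn. The file name, feature columns, and label are hypothetical placeholders, not part of the posting.

```python
# Hedged sketch: a Random Forest predictive-maintenance baseline with
# cross-validation. Column names and the CSV path are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("sensor_readings.csv")  # hypothetical condition-monitoring export
features = ["vibration_rms", "winding_temp_c", "load_factor"]
X, y = df[features], df["failure_within_30d"]

model = RandomForestClassifier(n_estimators=200, random_state=42)
# 5-fold cross-validation guards against overfitting before model selection.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC-AUC across folds: {scores.mean():.3f}")
```

In practice, a cross-validated score of this kind would guide model assessment and selection before any deployment decision.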

Posted 2 months ago

Apply

4 - 9 years

14 - 18 Lacs

Noida

Work from Office

Who We Are: Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you. What you need * BS in an Engineering or Science discipline, or equivalent experience * 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data-focused role * Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL) * Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments * Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW * Experience working on larger initiatives building and rationalizing large-scale data environments with a large variety of data pipelines, possibly with internal and external partner integrations, would be a plus * Willingness to experiment and learn new approaches and technology applications * Knowledge and experience with various relational databases and demonstrable proficiency in SQL and supporting analytics uses and users * Knowledge of software engineering and agile development best practices * Excellent written and verbal communication skills The Brightly culture: We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
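As a rough illustration of the batch integration pattern mentioned above, the sketch below reads raw files, applies a simple transformation, and writes partitioned Parquet to a lake path with PySpark. The bucket paths and column names are assumptions made for the example only, not this employer's actual pipeline.

```python
# Hedged sketch of a batch ETL step: read raw events, clean them, and write
# partitioned Parquet to a data-lake location. Paths and columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)
# Partitioning by date keeps downstream scans cheap for incremental batch loads.
cleaned.write.mode("append").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```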

Posted 2 months ago

Apply

7 - 12 years

35 - 40 Lacs

Pune

Work from Office

About The Role: Job Title: Business Functional Analyst (Analytics). Corporate Title: Vice President. Location: Pune, India. Role Description: The ERM (Enterprise Risk Management) & MVRM (Market & Valuation Risk Management) IT groups are part of Technology Data and Innovation and own and deliver on the RiskFinder platform for multiple stakeholders and sponsors. RiskFinder is the Bank's Risk & Capital Management platform. It provides the capability to calculate capital metrics, performs risk scenario analysis and portfolio risk analytics, and supports related control functions across the Bank's business lines. The system calculates over 600 billion scenarios per day on a high-performance compute grid, stores the results in a big data store, and provides our end users the capability to aggregate, report, and analyse the results. RiskFinder integrates distributed high-performance grid compute and big data technologies to deliver the execution and analytics at the very large scale required to process the volumes of scenarios within the timeframes required. The platform leverages in-house quantitative analytics and inputs to our front office pricing models to deliver full revaluation-based capital metrics across a complex derivatives portfolio. Our technology stack includes Java, C, C++, Postgres, Oracle DB, Lua, Python, Scala, and Spark, plus other off-the-shelf products like caching solutions integrated into one platform, which offers great opportunity for technical development and personal growth in a domain with a focus on engineering and Agile delivery practices. What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy; gender-neutral parental leave; 100% reimbursement under the childcare assistance benefit (gender neutral); sponsorship for industry-relevant certifications and education; an Employee Assistance Program for you and your family members; comprehensive hospitalization insurance for you and your dependents; accident and term life insurance; complimentary health screening for those 35 years and above. Your key responsibilities: Develop a sound knowledge of the business requirements around market and credit risk calculations to be implemented in the strategic risk platform. Liaise with the key stakeholders to understand and document business requirements for the strategic risk platform. Collaborate with business representatives and product leads to define optimal system solutions to meet business requirements. Continuously improve data visualisation, dashboard, and reporting capabilities. Drive the breakdown and prioritization of the system deliverables across applications that make up the strategic risk analytical platform. Provide subject matter expertise to the development teams to convey the business objectives of requirements and help make decisions on implementation specifics. Your skills and experience: Excellent business knowledge, esp. market and counterparty risk processes and methodologies, regulatory RWA calculations and reporting, derivatives pricing, and risk management. Strong business analysis and problem-solving skills. Effective communication and presentation skills. Exposure to software development lifecycle methodologies (waterfall, Agile, etc.). Data analysis, use of databases, and data modelling. Working knowledge of SQL, Python, PySpark, or any similar tools for data analysis/drill-down capability is a must.
Prior experience of leading a team by example would be highly beneficial. Experience in product management, building a product backlog, and understanding and executing a roadmap. How we'll support you: Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About us and our teams: Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
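To make the drill-down analysis requirement above concrete, here is a hedged PySpark sketch that aggregates scenario-level results by business line. The table layout, path, and column names are hypothetical illustrations, not RiskFinder's actual schema.

```python
# Hedged sketch: aggregate scenario-level risk results for drill-down reporting.
# Data path and columns ("business_line", "scenario_set", "pnl") are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("risk-drilldown").getOrCreate()
results = spark.read.parquet("/data/scenario_results/")

summary = (
    results.groupBy("business_line", "scenario_set")
           .agg(F.sum("pnl").alias("total_pnl"),
                F.expr("percentile_approx(pnl, 0.01)").alias("pnl_1pct"))  # tail-loss quantile
           .orderBy("business_line")
)
summary.show(truncate=False)
```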

Posted 2 months ago

Apply

7 - 9 years

5 - 9 Lacs

Bengaluru

Work from Office

Ecodel Infotel Pvt Ltd is looking for a Big Data Developer to join our dynamic team and embark on a rewarding career journey. A Developer is responsible for designing, developing, and maintaining software applications and systems. They collaborate with a team of software developers, designers, and stakeholders to create software solutions that meet the needs of the business. Key responsibilities: Design, code, test, and debug software applications and systems. Collaborate with cross-functional teams to identify and resolve software issues. Write clean, efficient, and well-documented code. Stay current with emerging technologies and industry trends. Participate in code reviews to ensure code quality and adherence to coding standards. Participate in the full software development life cycle, from requirement gathering to deployment. Provide technical support and troubleshooting for production issues. Requirements: Strong programming skills in one or more programming languages, such as Python, Java, C++, or JavaScript. Experience with software development tools, such as version control systems (e.g., Git), integrated development environments (IDEs), and debugging tools. Familiarity with software design patterns and best practices. Good communication and collaboration skills.

Posted 2 months ago

Apply

5 - 8 years

25 - 30 Lacs

Hyderabad

Work from Office

Ecodel Infotel Pvt Ltd is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

6 - 8 years

3 - 7 Lacs

Gurugram

Work from Office

Skills: Bachelor's degree / Master's degree with high rankings from reputed colleges. Preferably 6-8 years of ETL / data analysis experience with a reputed firm. Expertise in a Big Data managed platform environment like Databricks using Python/PySpark/SparkSQL. Experience in handling large data volumes and orchestrating automated ETL/data pipelines using CI/CD and cloud technologies. Experience deploying ETL/data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued. Experience in data modelling (e.g., database structure, entity relationships, UID, etc.), data profiling, and data quality validation. Experience adopting software development best practices (e.g., modularization, testing, refactoring, etc.). Conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools. Excellent written and verbal communication skills in English. Self-motivated with a strong sense of problem-solving, ownership, and an action-oriented mindset. Able to cope with pressure and demonstrate a reasonable level of flexibility/adaptability. Track record of strong problem-solving, requirement gathering, and leading by example. Able to work well within teams across continents/time zones with a collaborative mindset.
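A minimal sketch of the kind of automated data-quality check referred to above, written in PySpark; the dataset path, key column, and rules are illustrative assumptions rather than this role's actual checks.

```python
# Hedged sketch: basic data-quality gates (null keys, duplicates) on a curated table.
# Path, column names, and thresholds are placeholders for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("/mnt/curated/customers/")

total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()

# Fail the pipeline run if basic integrity rules are violated.
assert null_ids == 0, f"{null_ids} rows with null customer_id"
assert dupes == 0, f"{dupes} duplicate customer_id values"
print(f"Data quality checks passed for {total} rows")
```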

Posted 2 months ago

Apply

5 - 7 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office

Key Responsibilities: Big Data Architecture: Design, develop, and maintain scalable and distributed data architectures capable of processing large volumes of data. Data Storage Solutions: Implement and optimize data storage solutions using technologies such as Hadoop, Spark, and PySpark. PySpark Development: Develop and implement efficient ETL processes using PySpark to extract, transform, and load large datasets. Performance Optimization: Optimize PySpark applications for better performance, scalability, and resource management. Qualifications: Proven experience as a Big Data Engineer with a strong focus on PySpark. Deep understanding of Big Data processing frameworks and technologies. Strong proficiency in PySpark for developing and optimizing ETL processes and data transformations. Experience with distributed computing and parallel processing. Ability to collaborate in a fast-paced, innovative environment. Required Skills: PySpark, Big Data, Python.
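To illustrate the performance-optimization responsibility, the sketch below shows two common PySpark techniques, a broadcast join and a repartition before a wide aggregation; the paths and column names are placeholders, not part of the posting.

```python
# Hedged sketch: broadcast a small dimension table to avoid shuffling the large
# fact table, then repartition before a groupBy. Paths/columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-optimisation").getOrCreate()

events = spark.read.parquet("/data/events/")    # large fact table
devices = spark.read.parquet("/data/devices/")  # small dimension table

joined = events.join(F.broadcast(devices), "device_id")  # broadcast join: no shuffle of the large side
daily = (
    joined.repartition("event_date")                      # co-locate rows before the aggregation
          .groupBy("event_date", "device_type")
          .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").parquet("/data/daily_event_counts/")
```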

Posted 2 months ago

Apply

5 - 10 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Capital Markets Regulatory Compliance. Good to have skills: MicroStrategy Business Intelligence, Capital Markets Audit, Microsoft Power Business Intelligence (BI). Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As a member of the Business Acceptance Unit, you will participate in Business Acceptance Testing for the ECAG Regulatory Reporting and Analytics applications, and you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that the applications align with regulatory compliance standards while fostering a productive and inclusive work environment. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge sharing sessions to enhance team capabilities. Monitor project progress and ensure timely delivery of application features. Create and maintain functional specifications for the data warehouse applications. Participate in discussions related to project planning, functionality, and review of functional specifications. Create the test plan and test narratives, and define the test scope based on functional specifications and user stories. Develop manual test cases to test software changes in data warehouse applications. Create feature files in Gherkin format for test automation. Write Python scripts for test automation. Create test data scenarios and execute test cases. Create and maintain the regression test suite. Periodically report the test results and create test statistics. Follow up on bugs identified and retest the software. Professional & Technical Skills: Must Have Skills: Proficiency in Capital Markets Regulatory Compliance. Good To Have Skills: Experience with MicroStrategy Business Intelligence, Capital Markets Audit, Microsoft Power Business Intelligence (BI). Strong understanding of regulatory frameworks and compliance requirements in capital markets. Experience in application design and development methodologies. Ability to analyze complex regulatory requirements and translate them into actionable application features. Sound understanding of test methodology and agile software development methodology. Functional knowledge in derivatives and OTC clearing. Experience in collaboration tools such as Jira and GitHub. Experience in testing data warehouse / reporting applications. Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets. Experience with distributed data/computing tools (MapReduce, Hadoop, Hive, Spark, etc.) would be an advantage. Experience in visualizing/presenting data for stakeholders using Zeppelin, Power BI, or MicroStrategy will be an advantage. Additional Information: The candidate should have a minimum of 5 years of experience in Capital Markets Regulatory Compliance. This position is based at our Hyderabad office. A 15 years full time education is required.
Qualification: 15 years full time education
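As an illustration of the Gherkin-plus-Python test-automation pattern mentioned in the responsibilities above, here is a hedged sketch using the behave library. The scenario wording, table names, and the helper functions load_trades and generate_report are all hypothetical, not this project's actual test suite.

```python
# Hedged sketch: a Gherkin scenario (in the comment) with Python step
# definitions via behave. load_trades/generate_report are hypothetical helpers.
#
#   Feature: Regulatory report completeness
#     Scenario: All cleared trades appear in the daily report
#       Given the trades table is loaded for business date "2024-01-31"
#       When the daily regulatory report is generated
#       Then every cleared trade id appears in the report
from behave import given, when, then

@given('the trades table is loaded for business date "{business_date}"')
def step_load_trades(context, business_date):
    context.trades = load_trades(business_date)        # hypothetical helper

@when("the daily regulatory report is generated")
def step_generate_report(context):
    context.report = generate_report(context.trades)   # hypothetical helper

@then("every cleared trade id appears in the report")
def step_check_completeness(context):
    missing = {t.trade_id for t in context.trades if t.cleared} - set(context.report.trade_ids)
    assert not missing, f"Missing trade ids in report: {missing}"
```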

Posted 2 months ago

Apply

3 - 8 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Capital Markets Regulatory Compliance. Good to have skills: MicroStrategy Business Intelligence, Capital Markets Audit, Microsoft Power Business Intelligence (BI). Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As an Application Lead in the Business Acceptance Unit, you will participate in Business Acceptance Testing for the ECAG Regulatory Reporting and Analytics application, and you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the necessary compliance standards. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Your role will require you to stay updated on industry trends and best practices to enhance application performance and compliance. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Facilitate communication between technical teams and stakeholders to ensure alignment on project goals. Mentor junior team members, providing them with the necessary support and guidance to enhance their skills. Create and maintain functional specifications for the data warehouse applications. Participate in discussions related to project planning, functionality, and review of functional specifications. Create the test plan and test narratives, and define the test scope based on functional specifications and user stories. Develop manual test cases to test software changes in data warehouse applications. Create feature files in Gherkin format for test automation. Write Python scripts for test automation. Create test data scenarios and execute test cases. Create and maintain the regression test suite. Periodically report the test results and create test statistics. Follow up on bugs identified and retest the software. Professional & Technical Skills: Must Have Skills: Proficiency in Capital Markets Regulatory Compliance. Good To Have Skills: Experience with MicroStrategy Business Intelligence, Capital Markets Audit, Microsoft Power Business Intelligence (BI). Strong understanding of regulatory frameworks and compliance requirements in capital markets. Experience in application design and development processes. Ability to analyze complex problems and develop effective solutions. Functional knowledge in derivatives and OTC clearing. Experience in collaboration tools such as Jira and GitHub. Experience in testing data warehouse / reporting applications. Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets. Experience with distributed data/computing tools (MapReduce, Hadoop, Hive, Spark, etc.) would be an advantage. Experience in visualizing/presenting data for stakeholders using Zeppelin, Power BI, or MicroStrategy will be an advantage. Additional Information: The candidate should have a minimum of 3 years of experience in Capital Markets Regulatory Compliance. This position is based at our Hyderabad office. A 15 years full time education is required.
Qualification: 15 years full time education

Posted 2 months ago

Apply

7 - 12 years

13 - 18 Lacs

Hyderabad

Work from Office

Project Role: Quality Engineering Lead (Test Lead). Project Role Description: Leads a team of quality engineers through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Applies business and functional knowledge to develop end-to-end testing strategies through the use of quality processes and methodologies. Applies testing methodologies, principles, and processes to define and implement key metrics to manage and assess the testing process, including test execution and defect resolution. Must have skills: Capital Markets Regulatory Compliance. Good to have skills: MicroStrategy Business Intelligence, Capital Markets Audit, Microsoft Power Business Intelligence (BI). Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As an Application Lead in the Business Acceptance Unit, you will participate in Business Acceptance Testing for the ECAG Regulatory Reporting and Analytics application, and you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the necessary compliance standards. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Your role will require you to stay updated on industry trends and best practices to enhance application performance and compliance. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Facilitate communication between technical teams and stakeholders to ensure alignment on project goals. Mentor junior team members, providing them with the necessary support and guidance to enhance their skills. Create and maintain functional specifications for the data warehouse applications. Participate in discussions related to project planning, functionality, and review of functional specifications. Create the test plan and test narratives, and define the test scope based on functional specifications and user stories. Develop manual test cases to test software changes in data warehouse applications. Create feature files in Gherkin format for test automation. Write Python scripts for test automation. Create test data scenarios and execute test cases. Create and maintain the regression test suite. Periodically report the test results and create test statistics. Follow up on bugs identified and retest the software. Professional & Technical Skills: Must Have Skills: Proficiency in Capital Markets Regulatory Compliance. Good To Have Skills: Experience with MicroStrategy Business Intelligence, Capital Markets Audit, Microsoft Power Business Intelligence (BI). Strong understanding of regulatory frameworks and compliance requirements in capital markets. Experience in application design and development processes. Ability to analyze complex problems and develop effective solutions. Functional knowledge in derivatives and OTC clearing. Experience in collaboration tools such as Jira and GitHub. Experience in testing data warehouse / reporting applications. Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
Experience with distributed data/computing tools (MapReduce, Hadoop, Hive, Spark, etc.) would be an advantage. Experience in visualizing/presenting data for stakeholders using Zeppelin, Power BI, or MicroStrategy will be an advantage. Additional Information: The candidate should have a minimum of 3 years of experience in Capital Markets Regulatory Compliance. This position is based at our Hyderabad office. A 15 years full time education is required. Qualification: 15 years full time education

Posted 2 months ago

Apply

5 - 7 years

0 - 0 Lacs

Hyderabad

Work from Office

Senior Big Data Engineer. Experience: 7-9 years. Preferred location: Hyderabad. Must-have skills: Big Data, AWS cloud, Java/Scala/Python, CI/CD. Good-to-have skills: relational databases (any), NoSQL databases (any), microservices or domain services or API gateways or similar, containers (Docker, K8s, etc.). Required Skills: Big Data, AWS Cloud, CI/CD, Java/Scala/Python.

Posted 2 months ago

Apply

2 - 5 years

6 - 10 Lacs

Gurugram

Work from Office

KDataScience (USA & INDIA) is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

7 - 12 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead. Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: PySpark. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education. Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process, collaborating with team members, and making key decisions to ensure project success. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Lead the application development process effectively. Ensure timely delivery of projects. Provide guidance and mentorship to team members. Professional & Technical Skills: Must Have Skills: Proficiency in PySpark. Strong understanding of big data processing. Experience with data manipulation and transformation. Hands-on experience in building scalable applications. Knowledge of cloud platforms and services. Additional Information: The candidate should have a minimum of 7.5 years of experience in PySpark. This position is based at our Bengaluru office. A 15 years full-time education is required. Qualification: 15 years full time education

Posted 2 months ago

Apply

0 years

0 - 0 Lacs

Chennai, Tamil Nadu

Work from Office

Big Data Administrator: a) Full-time B.Tech / B.E / MCA / MS(IT) from a recognized Institution / University. b) Experience: 3+ years. d) Preferably: relevant Cloudera certifications. Experience of managing and administrating Hadoop systems (Cloudera), including cluster management, system administration, security and data management, troubleshooting, monitoring, and support. Experience in node-based data processing and dealing with large amounts of data. Experience with Hadoop, Hive, MapReduce, HBase, Java, etc. Job Types: Full-time, Permanent. Pay: ₹10,552.72 - ₹69,025.33 per month. Schedule: Day shift / Morning shift. Work Location: In person. Application Deadline: 29/05/2025

Posted 2 months ago

Apply

5 - 8 years

0 Lacs

Chennai, Tamil Nadu, India

Hybrid

Introduction: A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience. Your Role And Responsibilities: As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Preferred Education: Master's Degree. Required Technical And Professional Expertise: Core Java, Spring Boot, J2EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Preferred Technical And Professional Experience: None

Posted 2 months ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Overview: Viraaj HR Solutions is dedicated to delivering top-tier HR services and talent acquisition strategies to help companies throughout India thrive. Our mission is to connect skilled professionals with excellent opportunities, fostering a culture of collaboration, integrity, and innovation. We pride ourselves on understanding the unique needs of both our clients and candidates, ensuring a perfect fit for every role. At Viraaj HR Solutions, we prioritize our people's growth and development, making us a dynamic and rewarding workplace. Role Responsibilities: Design and implement scalable Big Data solutions using Hadoop technologies. Develop and maintain ETL processes to manage and process large data sets. Collaborate with data architects and analysts to gather requirements and deliver solutions. Optimize existing Hadoop applications for maximizing efficiency and performance. Write, test, and maintain complex SQL queries to extract and manipulate data. Implement data models and strategies that accommodate large-scale data processing. Conduct data profiling and analysis to ensure data integrity and accuracy. Utilize MapReduce frameworks to execute data processing tasks. Work closely with data scientists to facilitate exploratory data analysis. Ensure compliance with data governance and privacy regulations. Participate in code reviews and maintain documentation for all development processes. Troubleshoot and resolve performance bottlenecks and other technical issues. Stay current with technology trends and tools in Big Data and Cloud platforms. Train junior developers and assist in their professional development. Contribute to team meeting discussions regarding project status and ideas for improvement. Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Proven experience as a Big Data Developer or similar role. Strong foundational knowledge of the Hadoop ecosystem (HDFS, Hive, Pig, etc.). Proficient in programming languages such as Java, Python, or Scala. Experience with database management systems (SQL and NoSQL). Familiar with data processing frameworks like Apache Spark. Understanding of data pipeline architectures and data integration techniques. Knowledge of cloud computing services (AWS, Azure, or Google Cloud). Exceptional problem-solving skills and attention to detail. Strong communication skills and ability to work in a team environment. Experience with data visualization tools (Tableau, Power BI, etc.) is a plus. Ability to work under pressure and meet tight deadlines. Adaptability to new technologies and platforms as they emerge. Certifications in Big Data technologies would be an advantage. Willingness to learn and grow in the field of data sciences. Skills: data visualization, Spark, Java, cloud computing (AWS, Azure, Google Cloud), SQL proficiency, Apache Spark, data warehousing, data visualization tools, NoSQL, Scala, Python, ETL, SQL, Hadoop, MapReduce, big data analytics, big data
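For context on the MapReduce-style processing mentioned in the responsibilities above, a minimal word-count sketch using PySpark's RDD API is shown below; the input and output paths are placeholders, not part of the posting.

```python
# Hedged sketch of the classic MapReduce pattern with PySpark RDDs:
# map each record to a key/value pair, then reduce by key.
from pyspark import SparkContext

sc = SparkContext(appName="wordcount")
counts = (
    sc.textFile("hdfs:///data/input/*.txt")
      .flatMap(lambda line: line.split())    # map: emit one token per word
      .map(lambda word: (word, 1))           # map: (word, 1) pairs
      .reduceByKey(lambda a, b: a + b)       # reduce: sum counts per word
)
counts.saveAsTextFile("hdfs:///data/output/wordcount")
```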

Posted 2 months ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for the design, development, and maintenance of a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts, and subject-matter experts to plan, design, and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Though the role category is generally listed as Remote, this specific position is designated as Hybrid. Key Responsibilities Business Alignment & Collaboration – Partner with the Product Owner to align data solutions with strategic goals and business requirements. Data Pipeline Development & Management – Design, develop, test, and deploy scalable data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake) from various sources (ERP, CRM, relational, event-based, unstructured). Architecture & Standardization – Ensure compliance with AAI Digital Core and AAI Solutions Architecture standards for data pipeline design and implementation. Automation & Optimization – Design and automate distributed data ingestion and transformation systems, integrating ETL/ELT tools and scripting languages to ensure scalability, efficiency, and quality. Data Quality & Governance – Implement data governance processes, including metadata management, access control, and retention policies, while continuously monitoring and troubleshooting data integrity issues. Performance & Storage Optimization – Develop and implement physical data models, optimize database performance (indexing, table relationships), and operate large-scale distributed/cloud-based storage solutions (Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB). Innovation & Tool Evaluation – Conduct proof-of-concept (POC) initiatives, evaluate new data tools, and provide recommendations for improvements in data management and integration. Documentation & Best Practices – Maintain standard operating procedures (SOPs) and data engineering documentation to support consistency and efficiency. Agile Development & Automation – Use Agile methodologies (DevOps, Scrum, Kanban) to drive automation in data integration, preparation, and infrastructure management, reducing manual effort and errors. Coaching & Team Development – Provide guidance and mentorship to junior team members, fostering skill development and knowledge sharing. Responsibilities Competencies: System Requirements Engineering: Translates stakeholder needs into verifiable requirements, tracks status, and assesses impact changes. Collaborates: Builds partnerships and works collaboratively with others to meet shared objectives. Communicates Effectively: Delivers multi-mode communications tailored to different audiences. Customer Focus: Builds strong customer relationships and provides customer-centric solutions. Decision Quality: Makes good and timely decisions that drive the organization forward. Data Extraction: Performs ETL activities from various sources using appropriate tools and technologies. Programming: Develops, tests, and maintains code using industry standards, version control, and automation tools. Quality Assurance Metrics: Measures and assesses solution effectiveness using IT Operating Model (ITOM) standards. Solution Documentation: Documents knowledge gained and communicates solutions for improved productivity. 
Solution Validation Testing: Validates configurations and solutions to meet customer requirements using SDLC best practices. Data Quality: Identifies, corrects, and manages data flaws to support effective governance and decision-making. Problem Solving: Uses systematic analysis to determine root causes and implement robust solutions. Values Differences: Recognizes and leverages the value of diverse perspectives and cultures. Education, Licenses, Certifications: Bachelor's degree in a relevant technical discipline, or equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Qualifications Preferred Experience: Technical Expertise – Intermediate experience in data engineering with hands-on knowledge of Spark, Scala/Java, MapReduce, Hive, HBase, Kafka, and SQL. Big Data & Cloud Solutions – Proven ability to design and develop Big Data platforms, manage large datasets, and implement clustered compute solutions in cloud environments. Data Processing & Movement – Experience developing applications requiring large-scale file movement and utilizing various data extraction tools in cloud-based environments. Business & Industry Knowledge – Familiarity with analyzing complex business systems, industry requirements, and data regulations to ensure compliance and efficiency. Analytical & IoT Solutions – Experience building analytical solutions with exposure to IoT technology and its integration into data engineering processes. Agile Development – Strong understanding of Agile methodologies, including Scrum and Kanban, for iterative development and deployment. Technology Trends – Awareness of emerging technologies and trends in data engineering, with a proactive approach to innovation and continuous learning. Technical Skills: Programming Languages: Proficiency in Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Hands-on experience with Hadoop, Spark, Kafka, and similar frameworks. Cloud Services: Experience with Azure, Databricks, and AWS platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus. API Integration: Experience working with APIs to consume data from ERP and CRM systems. Job: Systems/Information Technology. Organization: Cummins Inc. Role Category: Remote. Job Type: Exempt - Experienced. ReqID: 2410681. Relocation Package: No.
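As a hedged illustration of the event-based ingestion pattern described above, the sketch below consumes a Kafka topic with Spark Structured Streaming and lands the records in a data-lake path; the broker, topic, and storage paths are assumptions for the example, not Cummins' actual configuration.

```python
# Hedged sketch: stream events from Kafka into a data-lake location with
# Spark Structured Streaming. Broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "erp-events")
         .load()
)
parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)
query = (
    parsed.writeStream.format("parquet")
          .option("path", "abfss://lake@account.dfs.core.windows.net/raw/erp_events/")
          .option("checkpointLocation", "/checkpoints/erp_events/")   # required for exactly-once recovery
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```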

Posted 2 months ago

Apply

2 - 5 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

TCS is Hiring !!! Location: Bangalore, Chennai, Kolkata, Hyderabad, Pune. Exp: 4-8 Yrs. Functional Skills: Experience in the Credit Risk / Regulatory Risk domain. Technical Skills: Spark, PySpark, Python, Hive, Scala, MapReduce, Unix shell scripting. Good to Have Skills: Exposure to machine learning techniques. Job Description: 4+ years of experience with developing, fine-tuning, and implementing programs/applications using Python/PySpark/Scala on a Big Data/Hadoop platform. Roles and Responsibilities: a) Work with a leading bank's Risk Management team on specific projects/requirements pertaining to risk models in consumer and wholesale banking. b) Enhance machine learning models using PySpark. c) Work with data scientists to build ML models based on business requirements and follow the ML cycle to deploy them all the way to the production environment. d) Participate in feature engineering, training models, scoring, and retraining. e) Architect the data pipeline and automate data ingestion and model jobs. Skills and competencies: Required: Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems. Working experience in PySpark and Scala to develop code to validate and implement models in Credit Risk/Banking. Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture. Familiarity with machine learning frameworks and libraries (such as scikit-learn, SparkML, TensorFlow, PyTorch, etc.). Experience in systems integration, web services, and batch processing. Experience in migrating code to PySpark/Scala is a big plus. The ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal conveyance regarding business strategy and IT strategy, business processes, and workflow. Flexibility in approach and thought process. Attitude to learn and comprehend the periodic changes in regulatory requirements as per the FED.
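To give a concrete flavour of the SparkML workflow referenced above, here is a minimal sketch that assembles features and fits a logistic-regression model for a binary credit-risk label; the column names and data path are hypothetical placeholders, not the bank's actual model.

```python
# Hedged sketch: a SparkML pipeline for a binary credit-risk classifier.
# Feature columns, label, and path are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("credit-risk-model").getOrCreate()
df = spark.read.parquet("/data/credit_features/")

assembler = VectorAssembler(
    inputCols=["utilisation_ratio", "months_on_book", "delinquency_count"],
    outputCol="features",
)
lr = LogisticRegression(featuresCol="features", labelCol="default_flag")

train, test = df.randomSplit([0.8, 0.2], seed=7)
model = Pipeline(stages=[assembler, lr]).fit(train)

# Hold-out evaluation before any promotion toward production.
predictions = model.transform(test)
auc = BinaryClassificationEvaluator(labelCol="default_flag").evaluate(predictions)
print(f"Test AUC: {auc:.3f}")
```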

Posted 2 months ago

Apply

5 - 10 years

15 - 30 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Skills: Mandatory: SQL, Python, Databricks, Spark/PySpark. Good to have: MongoDB, Dataiku DSS, Databricks. Experience in data processing using Python/Scala. Advanced working SQL knowledge and expertise using relational databases. Need early joiners. Required Candidate Profile: ETL development tools like Databricks/Airflow/Snowflake. Expert in building and optimizing "big data" data pipelines, architectures, and data sets. Proficient in Big Data tools and the surrounding ecosystem.

Posted 2 months ago

Apply

5 - 8 years

0 Lacs

Pune, Maharashtra, India

Hybrid

About the Company: Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. It is headquartered in Bengaluru, has gross revenue of ₹222.1 billion with a global workforce of 234,054, is listed on the NASDAQ, and operates in over 60 countries, serving clients across various industries including financial services, healthcare, manufacturing, retail, and telecommunications. The company consolidated its cloud, data, analytics, AI, and related businesses under the tech services business line. Major delivery centers in India include cities like Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida. Job Title: ETL Testing + Java + API Testing + UNIX Commands. Location: Pune [Kharadi] (Hybrid). Experience: 7+ yrs [with 7+ years of relevant experience in ETL Testing]. Job Type: Contract to hire. Notice Period: Immediate joiners. Mandatory Skills: ETL Testing, Java, API Testing, UNIX Commands. Job Description: 1) Key Responsibilities: Extensive experience in validating ETL processes, ensuring accurate data extraction, transformation, and loading across multiple environments. Proficient in Java programming, with the ability to understand and write Java code when required. Advanced skills in SQL for data validation, querying databases, and ensuring data consistency and integrity throughout the ETL process. Expertise in utilizing Unix commands to manage test environments, handle file systems, and execute system-level tasks. Proficient in creating shell scripts to automate testing processes, enhancing productivity and reducing manual intervention. Ensuring that data transformations and loads are accurate, with strong attention to identifying and resolving discrepancies in the ETL process. Focused on automating repetitive tasks and optimizing testing workflows to increase overall testing efficiency. Write and execute automated test scripts using Java to ensure the quality and functionality of ETL solutions. Utilize Unix commands and shell scripting to automate repetitive tasks and manage system processes. Collaborate with cross-functional teams, including data engineers, developers, and business analysts, to ensure the ETL processes meet business requirements. Ensure that data transformations, integrations, and pipelines are robust, secure, and efficient. Troubleshoot data discrepancies and perform root cause analysis for failed data loads. Create comprehensive test cases, execute them, and document test results for all data flows. Actively participate in the continuous improvement of ETL testing processes and methodologies. Experience with version control systems (e.g., Git) and integrating testing into CI/CD pipelines. 2) Tools & Technologies (Good to Have): Experience with Hadoop ecosystem tools such as HDFS, MapReduce, Hive, and Spark for handling large-scale data processing and storage. Knowledge of NiFi for automating data flows, transforming data, and integrating different systems seamlessly. Experience with tools like Postman, SoapUI, or RestAssured to validate REST and SOAP APIs, ensuring correct data exchange and handling of errors.

Posted 2 months ago

Apply

7 - 12 years

50 - 75 Lacs

Bengaluru

Work from Office

---- What the Candidate Will Do ---- Partner with engineers, analysts, and product managers to define technical solutions that support business goals Contribute to the architecture and implementation of distributed data systems and platforms Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost Serve as a thought leader and mentor in data engineering best practices across the organization ---- Basic Qualifications ---- 7+ years of hands-on experience in software engineering with a focus on data engineering Proficiency in at least one programming language such as Python, Java, or Scala Strong SQL skills and experience with large-scale data processing frameworks (e.g., Apache Spark, Flink, MapReduce, Presto) Demonstrated experience designing, implementing, and operating scalable ETL pipelines and data platforms Proven ability to work collaboratively across teams and communicate technical concepts to diverse stakeholders ---- Preferred Qualifications ---- Deep understanding of data warehousing concepts and data modeling best practices Hands-on experience with Hadoop ecosystem tools (e.g., Hive, HDFS, Oozie, Airflow, Spark, Presto) Familiarity with streaming technologies such as Kafka or Samza Expertise in performance optimization, query tuning, and resource-efficient data processing Strong problem-solving skills and a track record of owning systems from design to production

Posted 2 months ago

Apply

4 - 9 years

3 - 7 Lacs

Hyderabad

Work from Office

Data Engineer (Full-Time, 4+ years). Responsibilities: Design, develop, and maintain data pipelines and ETL processes. Build and optimize data architectures for analytics and reporting. Collaborate with data scientists and analysts to support data-driven initiatives. Implement data security and governance best practices. Monitor and troubleshoot data infrastructure and ensure high availability. Skills: Proficiency in data engineering tools (Hadoop, Spark, Kafka, etc.). Strong SQL and programming skills (Python, Java, etc.). Experience with cloud platforms (AWS, Azure, GCP). Knowledge of data modeling, warehousing, and ETL processes. Strong problem-solving and analytical abilities.

Posted 2 months ago

Apply

5 - 8 years

0 Lacs

Pune, Maharashtra, India

Hybrid

Key Result Areas And Activities: ETL Pipeline Development and Maintenance: Design, develop, and maintain ETL pipelines using Cloudera tools such as Apache NiFi, Apache Flume, and Apache Spark. Create and maintain comprehensive documentation for data pipelines, configurations, and processes. Data Integration and Processing: Integrate and process data from diverse sources including relational databases, NoSQL databases, and external APIs. Performance Optimization: Optimize performance and scalability of Hadoop components (HDFS, YARN, MapReduce, Hive, Spark) to ensure efficient data processing. Identify and resolve issues related to data pipelines, system performance, and data integrity. Data Quality and Transformation: Implement data quality checks and manage data transformation processes to ensure accuracy and consistency. Data Security and Compliance: Apply data security measures and ensure compliance with data governance policies and regulatory requirements. Essential Skills: Proficiency in Cloudera Data Platform (CDP) - Cloudera Data Engineering. Proven track record of successful data lake implementations and pipeline development. Knowledge of data lakehouse architectures and their implementation. Hands-on experience with Apache Spark and Apache Airflow within the Cloudera ecosystem. Proficiency in programming languages such as Python, Java, Scala, and Shell. Exposure to containerization technologies (e.g., Docker, Kubernetes) and a system-level understanding of data structures, algorithms, distributed storage, and compute. Desirable Skills: Experience with other CDP services like DataFlow and Stream Processing. Familiarity with cloud environments such as AWS, Azure, or Google Cloud Platform. Understanding of data governance and data quality principles. CCP Data Engineer certification. Qualifications: 7+ years of experience in Cloudera/Hadoop/Big Data engineering or related roles. Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Qualities: Can influence and implement change; demonstrates confidence, strength of conviction, and sound decisions. Believes in dealing with a problem head-on; approaches it in a logical and systematic manner; is persistent and patient; can independently tackle the problem; is not over-critical of the factors that led to a problem and is practical about it; follows up with developers on related issues. Able to consult, write, and present persuasively. Able to work in a self-organized and cross-functional team. Able to iterate based on new information, peer reviews, and feedback. Able to work seamlessly with clients across multiple geographies. Research-focused mindset. Proficiency in English (read/write/speak) and communication over email. Excellent analytical, presentation, reporting, documentation, and interactive skills.
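As a rough sketch of how a daily pipeline like those described above might be orchestrated with Apache Airflow, the example below chains a Spark ingestion job and a data-quality check; the DAG id, schedule, and script paths are assumptions for illustration only.

```python
# Hedged sketch: a two-task Airflow DAG that submits a Spark ingestion job,
# then a data-quality check. All identifiers and paths are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_ingest_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_ingest",
        bash_command="spark-submit /opt/jobs/ingest_daily.py {{ ds }}",
    )
    quality_check = BashOperator(
        task_id="dq_check",
        bash_command="spark-submit /opt/jobs/dq_check.py {{ ds }}",
    )
    ingest >> quality_check  # run the quality check only after ingestion succeeds
```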

Posted 2 months ago

Apply

5 - 8 years

8 - 30 Lacs

Pune, Maharashtra, India

On-site

Company Overview: Viraaj HR Solutions is dedicated to delivering top-tier HR services and talent acquisition strategies to help companies throughout India thrive. Our mission is to connect skilled professionals with excellent opportunities, fostering a culture of collaboration, integrity, and innovation. We pride ourselves on understanding the unique needs of both our clients and candidates, ensuring a perfect fit for every role. At Viraaj HR Solutions, we prioritize our people's growth and development, making us a dynamic and rewarding workplace. Role Responsibilities: Design and implement scalable Big Data solutions using Hadoop technologies. Develop and maintain ETL processes to manage and process large data sets. Collaborate with data architects and analysts to gather requirements and deliver solutions. Optimize existing Hadoop applications for maximizing efficiency and performance. Write, test, and maintain complex SQL queries to extract and manipulate data. Implement data models and strategies that accommodate large-scale data processing. Conduct data profiling and analysis to ensure data integrity and accuracy. Utilize MapReduce frameworks to execute data processing tasks. Work closely with data scientists to facilitate exploratory data analysis. Ensure compliance with data governance and privacy regulations. Participate in code reviews and maintain documentation for all development processes. Troubleshoot and resolve performance bottlenecks and other technical issues. Stay current with technology trends and tools in Big Data and Cloud platforms. Train junior developers and assist in their professional development. Contribute to team meeting discussions regarding project status and ideas for improvement. Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Proven experience as a Big Data Developer or similar role. Strong foundational knowledge of the Hadoop ecosystem (HDFS, Hive, Pig, etc.). Proficient in programming languages such as Java, Python, or Scala. Experience with database management systems (SQL and NoSQL). Familiar with data processing frameworks like Apache Spark. Understanding of data pipeline architectures and data integration techniques. Knowledge of cloud computing services (AWS, Azure, or Google Cloud). Exceptional problem-solving skills and attention to detail. Strong communication skills and ability to work in a team environment. Experience with data visualization tools (Tableau, Power BI, etc.) is a plus. Ability to work under pressure and meet tight deadlines. Adaptability to new technologies and platforms as they emerge. Certifications in Big Data technologies would be an advantage. Willingness to learn and grow in the field of data sciences. Skills: data visualization, Spark, Java, cloud computing (AWS, Azure, Google Cloud), SQL proficiency, Apache Spark, data warehousing, data visualization tools, NoSQL, Scala, Python, ETL, SQL, Hadoop, MapReduce, big data analytics, big data

Posted 2 months ago

Apply