
623 MapReduce Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 6.0 years

6 - 10 Lacs

Hyderabad

Work from Office

As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience with cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform and for troubleshooting any issues that arise.

Roles & Responsibilities:
- Designing, developing, and maintaining large-scale big data platforms using technologies like Hadoop, Spark, and Kafka
- Creating and managing data warehouses, data lakes, and data marts
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining security and access controls
- Troubleshooting and resolving big data platform issues
- Collaborating with other teams to ensure the consistency and integrity of data

Technical Skills Requirements:
- Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka.
- Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams (see the word-count sketch below).
- Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage.
- Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala.
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders clearly and concisely.
- An understanding of, and alignment with, the company's long-term vision.
- Leadership, guidance, and support for team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university. A computer science background is preferred.
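As a concrete illustration of the MapReduce and Spark RDD concepts listed above, here is a minimal word-count sketch in PySpark; it assumes only a local PySpark installation, and the input data is fabricated.

```python
# Minimal MapReduce-style word count on Spark RDDs (illustrative sketch;
# assumes PySpark is installed locally).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize(["big data platforms", "big data pipelines"])  # toy input

counts = (
    lines.flatMap(lambda line: line.split())  # map: emit each word
         .map(lambda word: (word, 1))         # map: pair each word with 1
         .reduceByKey(lambda a, b: a + b)     # reduce: sum counts per word
)

print(counts.collect())  # e.g. [('big', 2), ('data', 2), ('platforms', 1), ('pipelines', 1)]
spark.stop()
```

The same shuffle-and-aggregate pattern underlies classic Hadoop MapReduce jobs; Spark simply keeps intermediate data in memory.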

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Kenvue is currently recruiting for a: CDP Developer

What we do: At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we're the house of iconic brands - including NEUTROGENA®, AVEENO®, TYLENOL®, LISTERINE®, JOHNSON'S® and BAND-AID® - that you already know and love. Science is our passion; care is our talent.

Who we are: Our global team is ~22,000 brilliant people with a workplace culture where every voice matters and every contribution is appreciated. We are passionate about insights and innovation, and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact millions of people every day. We put people first, care fiercely, earn trust with science and solve with courage - and have brilliant opportunities waiting for you! Join us in shaping our future - and yours.

Role reports to: Manager, Digital Engagement
Location: Asia Pacific, India, Karnataka, Bangalore
Work Location: Hybrid

What you will do: The Global MarTech organization is seeking a strong Developer to join our Customer Data Platform team to build and maintain data orchestration pipelines. This individual will apply specialized knowledge, skills, and problem-solving techniques to challenge the status quo, deliver innovation, and determine how best to support the business through the effective use of technology. Primary responsibilities include operating as part of the Agile team, working alongside the Product Owner to understand requirements and translate them into solutions.

Key Responsibilities:
- Build scalable and efficient data pipelines and ETL processes to support business needs (see the ELT sketch below)
- Collaborate with cross-functional teams on architecture design, scoping, and prioritization of technical requirements, and solve complex data-related problems
- Stay up to date with emerging trends and technologies in data engineering

What we are looking for

Required Qualifications:
- 3+ years of experience in data engineering with working knowledge of SQL and data-oriented solutions (DWH, ETL, etc.)
- Bachelor's degree or equivalent in Computer Science, Engineering, or a related field
- Specialized knowledge of ELT development using DBT (data build tool) and RDBMSs (MySQL, SQL Server, Oracle, PostgreSQL), with SQL and Python programming
- Familiarity with team projects and collaboration using version control tools such as GitHub or GitLab
- Proficient verbal and written English, with strong working experience using English

Desired Qualifications:
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills
- Experience with big data technologies (Hadoop, Spark, Kafka, MapReduce, Hive/Pig, Cassandra, MongoDB, etc.)
- Experience with cloud-based or SaaS products and familiarity with digital marketing and marketing technology, e.g. TreasureData, Acquia, Salesforce Data Cloud, Adobe AEP, Tealium AudienceStream, Segment, ActionIQ, etc.

If you are an individual with a disability, please check our Disability Assistance page for information on how to request an accommodation.
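To make the ELT pattern named in the qualifications concrete, here is a minimal sketch: land raw rows first, then transform inside the database with SQL, the way a dbt model would materialize a table. sqlite3 stands in for MySQL/PostgreSQL, and the table names are hypothetical.

```python
# Minimal ELT sketch: load raw data, then transform in SQL inside the
# database. sqlite3 is a stand-in for a production RDBMS; names are
# hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, channel TEXT, spend REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [("u1", "email", 10.0), ("u1", "web", 5.0), ("u2", "email", 7.5)],
)

# Transform step: aggregate in SQL, much as a dbt model would materialize it.
conn.execute("""
    CREATE TABLE user_spend AS
    SELECT user_id, SUM(spend) AS total_spend
    FROM raw_events
    GROUP BY user_id
""")
for row in conn.execute("SELECT * FROM user_spend ORDER BY user_id"):
    print(row)  # ('u1', 15.0), ('u2', 7.5)
```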

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

The Staff ML Scientist position at Visa offers a unique opportunity to engage in cutting-edge applied AI research within the realm of data analytics. As a key member of the team, you will play a pivotal role in driving Visa's strategic vision as a leading data-driven company. Your responsibilities will involve formulating complex business problems as technical data challenges, collaborating closely with product stakeholders to ensure the practicality of solutions, and delivering impactful prototypes and production code. You will have the chance to experiment with various datasets, both in-house and third-party, to evaluate their relevance to business objectives. Moreover, your role will encompass building data transformations for structured and unstructured data, exploring and refining modeling and scoring algorithms, and implementing methods for adaptive learning and model validation. Your expertise in automation and predictive analytics will be instrumental in enhancing operational efficiency and performance monitoring.

In addition to your technical skills, you will be expected to possess a strong academic background and exceptional software engineering capabilities. A proactive and detail-oriented approach, coupled with excellent collaboration skills, will be essential for success in this role.

This is a hybrid position, allowing for a flexible work arrangement that combines remote work and office presence. The expectation is to work from the office 2-3 set days per week, with a general guideline of being in the office at least 50% of the time based on business requirements.

Qualifications:
- 8 or more years of work experience with a Bachelor's degree or an advanced degree
- Proficiency in modeling techniques such as logistic regression, Naive Bayes, SVM, decision trees, or neural networks (see the sketch below)
- Ability to program in scripting languages like Perl or Python, and programming languages such as Java, C++, or C#
- Familiarity with statistical tools like SAS, R, and KNIME, and experience with deep learning frameworks like TensorFlow
- Knowledge of Natural Language Processing and of working with large datasets using tools such as Hadoop, MapReduce, Pig, or Hive
- Publications or presentations in recognized Machine Learning and Data Mining journals/conferences would be advantageous

Join Visa as a Staff ML Scientist and contribute to pioneering advancements in applied AI research that drive innovation and shape the future of data analytics.
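As a small illustration of one modeling technique named in the qualifications (logistic regression), here is a self-contained scikit-learn sketch on synthetic data; the dataset and metric choice are illustrative only.

```python
# Logistic regression on synthetic data (illustrative sketch using
# scikit-learn; features and labels are generated, not real).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]   # probability of the positive class
print(f"AUC: {roc_auc_score(y_test, scores):.3f}")
```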

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 7 Lacs

Cochin

Remote

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity: We are looking for a seasoned and strategic-thinking Senior AWS DataOps Engineer to join our growing global data team. In this role, you will take ownership of critical data workflows and work closely with cross-functional teams to support, optimize, and scale cloud-based data pipelines. You will bring leadership to data operations, contribute to architectural decisions, and help ensure the integrity, availability, and performance of our AWS data infrastructure.

Your key responsibilities:
- Lead the design, monitoring, and optimization of AWS-based data pipelines using services like AWS Glue, EMR, Lambda, and Amazon S3.
- Oversee and enhance complex ETL workflows involving IICS (Informatica Intelligent Cloud Services), Databricks, and native AWS tools.
- Collaborate with data engineering and analytics teams to streamline ingestion into Amazon Redshift and lead data validation strategies.
- Manage job orchestration using Apache Airflow, AWS Data Pipeline, or equivalent tools, ensuring SLA adherence (see the orchestration sketch below).
- Guide SQL query optimization across Redshift and other AWS databases for analytics and operational use cases.
- Perform root cause analysis of critical failures, mentor junior staff on best practices, and implement preventive measures.
- Lead deployment activities through robust CI/CD pipelines, applying DevOps principles and automation.
- Own the creation and governance of SOPs, runbooks, and technical documentation for data operations.
- Partner with vendors, security, and infrastructure teams to ensure compliance, scalability, and cost-effective architecture.

Skills and attributes for success:
- Expertise in AWS data services and the ability to lead architectural discussions.
- Analytical thinking, with the ability to design and optimize end-to-end data workflows.
- Excellent debugging and incident resolution skills in large-scale data environments.
- Strong leadership and mentoring capabilities, with clear communication across business and technical teams.
- A growth mindset with a passion for building reliable, scalable data systems.
- Proven ability to manage priorities and navigate ambiguity in a fast-paced environment.

To qualify for the role, you must have:
- 5-8 years of experience in DataOps, Data Engineering, or related roles.
- Strong hands-on expertise in Databricks.
- Deep understanding of ETL pipelines and modern data integration patterns.
- Proven experience with Amazon S3, EMR, Glue, Lambda, and Amazon Redshift in production environments.
- Experience with Airflow or AWS Data Pipeline for orchestration and scheduling.
- Advanced knowledge of IICS or similar ETL tools for data transformation and automation.
- SQL skills with an emphasis on performance tuning, complex joins, and window functions.

Technologies and Tools

Must haves:
- Proficiency in Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda
- Expertise in Databricks, with the ability to develop, optimize, and troubleshoot advanced notebooks
- Strong experience with Amazon Redshift for scalable data warehousing and analytics
- Solid understanding of orchestration tools like Apache Airflow or AWS Data Pipeline
- Hands-on experience with IICS (Informatica Intelligent Cloud Services) or comparable ETL platforms

Good to have:
- Exposure to Power BI or Tableau for data visualization
- Familiarity with CDI, Informatica, or other enterprise-grade data integration platforms
- Understanding of DevOps and CI/CD automation tools for data engineering workflows
- SQL familiarity across large datasets and distributed databases

What we look for:
- Enthusiastic learners with a passion for data ops and best practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What we offer: EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations - Argentina, China, India, the Philippines, Poland and the UK - and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
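A minimal sketch of the Airflow orchestration referenced above: one extract task feeding one load task. It assumes Airflow 2.4+ is installed; the task names and logic are hypothetical placeholders for Glue/EMR triggers and Redshift loads.

```python
# Minimal Airflow DAG sketch (illustrative; assumes apache-airflow 2.4+;
# task bodies are hypothetical stand-ins for real pipeline steps).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull increment from source")  # stand-in for a Glue/EMR trigger

def load():
    print("copy into Redshift")          # stand-in for a COPY command

with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # named schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2   # extract must succeed before load runs
```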

Posted 2 weeks ago

Apply

9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9 - 12 Years, 1 Opening, Trivandrum

Role description

Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformations, Teradata data warehouse, Hadoop, Analytics). Responsible for architecture for small/mid-size projects.

Outcomes:
- Implement data extraction and transformation, a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures), data analysis solutions, data reporting solutions, or cloud data tools in any one of the cloud providers (AWS/Azure/GCP)
- Understand business workflows and related data flows; develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data or design data fetching and dashboards
- Design information structure, work- and dataflow navigation; define backup, recovery and security specifications
- Enforce and maintain naming standards and a data dictionary for data models
- Provide or guide the team to perform estimates
- Help the team develop proofs of concept (POCs) and solutions relevant to customer problems; able to troubleshoot problems while developing POCs
- Architect/Big Data specialty certification (AWS/Azure/GCP/general, for example Coursera or a similar learning platform/any ML)

Measures of Outcomes:
- Percentage of billable time spent in a year on developing and implementing data transformation or data storage
- Number of best practices documented for any new tool or technology emerging in the market
- Number of associates trained on the data service practice

Outputs Expected:

Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy, and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators, and scientific research communities, or attend conferences with respect to data in the cloud.

Operational Management: Help architects to establish governance, stewardship, and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications, and systems to support data technology goals. Collaborate with project managers and business teams for all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility, and multi-platform integration.

Project Control and Review: Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics.

Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards, and other knowledge articles for data management. Conduct and facilitate knowledge sharing and learning sessions across the team. Gain industry-standard certifications in the technology or area of expertise. Support technical skill building (including hiring and training) for the team based on inputs from the project manager/RTEs. Mentor new members of the team in technical areas. Gain and cultivate domain expertise to provide the best and most optimized solution to the customer (delivery).

Requirement Gathering and Analysis: Work with customer business owners and other teams to collect, analyze, and understand the requirements, including NFRs/define NFRs. Analyze gaps/trade-offs based on the current system context and industry practices; clarify the requirements by working with the customer. Define the systems and sub-systems that make up the programs.

People Management: Set goals and manage the performance of team engineers. Provide career guidance to technical specialists and mentor them.

Alliance Management: Identify alliance partners based on an understanding of service offerings and client requirements. In collaboration with the Architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and their relevance to the program.

Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscape, processes, and tools to arrive at the architecture options that best fit the client program. Analyze cost vs. benefits of solution options. Support Architects II and III in creating a technology/architecture roadmap for the client. Define the architecture strategy for the program.

Innovation and Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices.

Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies.

Stakeholder Management: Monitor the concerns of internal stakeholders like Product Managers and RTEs, and external stakeholders like client architects, on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand the professional network in the client organization at team and program levels.

New Service Design: Identify potential opportunities for new service offerings based on customer voice/partner inputs. Conduct beta testing/POCs as applicable. Develop collateral and guides for GTM.

Skill Examples:
- Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of architects
- Use technology knowledge to create proofs of concept (POCs)/(reusable) assets under the guidance of the specialist; apply best practices in own area of work, helping with performance troubleshooting and other complex troubleshooting
- Define, decide, and defend the technology choices made; review solutions under guidance
- Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST
- Use independent knowledge of design patterns, tools, and principles to create high-level designs for the given requirements; evaluate multiple design options and choose the appropriate options for the best possible trade-offs; conduct knowledge sessions to enhance the team's design capabilities; review the low- and high-level designs created by specialists for efficiency (consumption of hardware, memory, memory leaks, etc.)
- Use knowledge of software development processes, tools, and techniques to identify and assess incremental improvements to the software development process, methodology, and tools; take technical responsibility for all stages of the software development process; conduct optimal coding with a clear understanding of memory leakage and related impacts
- Implement global standards and guidelines relevant to programming and development; come up with points of view and new technological ideas
- Use knowledge of project management and agile tools and techniques to support, plan, and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies
- Use knowledge of project metrics to understand their relevance to the project; collect and collate project metrics and share them with the relevant stakeholders
- Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place
- Strong proficiency in understanding data workflows and dataflow
- Attention to detail
- High analytical capability

Knowledge Examples:
- Data visualization
- Data migration
- RDBMSs (relational database management systems), SQL
- Hadoop technologies like MapReduce, Hive and Pig
- Programming languages, especially Python and Java
- Operating systems like UNIX and MS Windows
- Backup/archival software

Additional Comments: AI Architect

Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities.

Key Responsibilities:
• Design and architect AI-based solutions, including multi-agent GenAI systems using LLMs and RAG pipelines (see the retrieval sketch below).
• Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants.
• Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics.
• Leverage GenAI (LLMs) and time series models to drive intelligent observability and performance management.
• Work closely with product, engineering, and operations teams to align solutions with domain and customer needs.
• Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices.
• Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments.

Key Skills & Technology Areas:
• AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI.
• LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps.
• Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis.
• Modeling & Analytics: Time Series Forecasting, Predictive Modeling, Synthetic Data Generation.
• Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory.
• Cloud & Tools: AWS (Bedrock, SageMaker, Lambda), Azure (Azure ML, Azure Databricks, Synapse), GCP (Vertex AI - optional).
• Observability Integration: Splunk, ELK Stack, Prometheus.
• DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring & versioning.
• Architectural Patterns: Microservices, Event-Driven Architecture, Multi-Agent Systems, API-first Design.

Other Requirements:
• Proven ability to work independently and collaboratively in agile, innovation-driven teams.
• Strong problem-solving mindset and product-oriented thinking.
• Excellent communication and technical storytelling skills.
• Flexibility to work across multiple opportunities based on business priorities.
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.

Skills: Python, Pandas, AI/ML, GenAI

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
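As a toy illustration of the retrieval step in the RAG pipelines mentioned in the role summary above, here is a self-contained sketch using cosine similarity over stand-in embeddings; a real system would call an embedding model and a vector store such as ChromaDB or FAISS.

```python
# Toy RAG retrieval step (illustrative sketch): rank documents by cosine
# similarity to a query vector. Random vectors stand in for real embeddings.
import numpy as np

documents = ["reset the router", "check the RCA report", "escalate the ticket"]
doc_vectors = np.random.default_rng(0).normal(size=(3, 8))  # stand-in embeddings

def retrieve(query_vec, k=2):
    sims = doc_vectors @ query_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vec)
    )
    top = np.argsort(sims)[::-1][:k]
    return [documents[i] for i in top]  # context to prepend to the LLM prompt

print(retrieve(np.random.default_rng(1).normal(size=8)))
```

In a full pipeline the retrieved passages are concatenated into the prompt so the LLM can ground its answer in them.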

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a Solutions Architect with over 7 years of experience, you will have the opportunity to leverage your expertise in cloud data solutions to architect scalable and modern solutions on AWS. In this role at Quantiphi, you will be a key member of our high-impact engineering teams, working closely with clients to solve complex data challenges and design cutting-edge data analytics solutions.

Your responsibilities will include acting as a trusted advisor to clients, leading discovery/design workshops with global customers, and collaborating with AWS subject matter experts to develop compelling proposals and Statements of Work (SOWs). You will also represent Quantiphi in various forums such as tech talks, webinars, and client presentations, providing strategic insights and solutioning support during pre-sales activities.

To excel in this role, you should have a strong background in AWS Data Services, including DMS, SCT, Redshift, Glue, Lambda, EMR, and Kinesis (see the sketch below). Your experience in data migration and modernization, particularly from Oracle, Teradata, and Netezza to AWS, will be crucial. Hands-on experience with ETL tools such as SSIS, Informatica, and Talend, as well as a solid understanding of OLTP/OLAP, star and snowflake schemas, and data modeling methodologies, are essential for success in this position. Additionally, familiarity with backend development using Python, APIs, and stream-processing technologies like Kafka, along with knowledge of distributed computing concepts including Hadoop and MapReduce, will be beneficial. A DevOps mindset, with experience in CI/CD practices and Infrastructure as Code, is also desired.

Joining Quantiphi as a Solutions Architect is more than just a job; it's an opportunity to shape digital transformation journeys and influence business strategies across various industries. If you are a cloud data enthusiast looking to make a significant impact in the field of data analytics, this role is perfect for you.
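As a brief, hedged illustration of driving one step of such an AWS pipeline from code, here is a boto3 sketch that starts an AWS Glue job run; the job name and argument are hypothetical, and configured AWS credentials are assumed.

```python
# Start a Glue job run via boto3 (illustrative sketch; the job name and
# argument are hypothetical, and AWS credentials must be configured).
import boto3

glue = boto3.client("glue", region_name="us-east-1")

response = glue.start_job_run(
    JobName="oracle-to-redshift-migration",      # hypothetical Glue job
    Arguments={"--target_schema": "analytics"},  # optional job parameters
)
print("started run:", response["JobRunId"])
```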

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a skilled Senior Engineer at Impetus Technologies, you will utilize your expertise in Java and Big Data technologies to design, develop, and deploy scalable data processing applications. Your responsibilities will include collaborating with cross-functional teams, developing high-quality code, and optimizing data processing workflows. Additionally, you will mentor junior engineers and contribute to architectural decisions that enhance system performance and scalability.

Key Responsibilities:
- Design, develop, and maintain high-performance applications using Java and Big Data technologies.
- Implement data ingestion and processing workflows with frameworks like Hadoop and Spark.
- Collaborate with the data architecture team to define efficient data models.
- Optimize existing applications for performance, scalability, and reliability.
- Mentor junior engineers, provide technical leadership, and promote continuous improvement.
- Participate in code reviews and ensure best practices for coding, testing, and documentation.
- Stay up to date with technology trends in Java and Big Data, and evaluate new tools and methodologies.

Skills and Tools Required:
- Strong proficiency in Java programming for building complex applications.
- Hands-on experience with Big Data technologies like Apache Hadoop, Apache Spark, and Apache Kafka.
- Understanding of distributed computing concepts and technologies.
- Experience with data processing frameworks and libraries such as MapReduce and Spark SQL (see the sketch below).
- Familiarity with storage and database systems like HDFS, NoSQL databases (e.g., Cassandra, MongoDB), and SQL databases.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Knowledge of version control systems like Git and familiarity with CI/CD pipelines.
- Excellent communication and teamwork skills for effective collaboration.

About the Role: You will be responsible for designing and developing scalable Java applications for Big Data processing, collaborating with cross-functional teams to implement innovative solutions, and ensuring code quality and performance through best practices and testing methodologies.

About the Team: You will work with a diverse team of skilled engineers, data scientists, and product managers in a collaborative environment that encourages knowledge sharing and continuous learning. Technical workshops and brainstorming sessions will provide opportunities to enhance your skills and stay updated with industry trends.

Responsibilities:
- Developing and maintaining high-performance Java applications for efficient data processing.
- Implementing data integration and processing frameworks using Big Data technologies.
- Troubleshooting and optimizing systems to enhance performance and scalability.

To succeed in this role, you should have:
- Strong proficiency in Java and experience with Big Data technologies and frameworks.
- A solid understanding of data structures, algorithms, and software design principles.
- Excellent problem-solving skills and the ability to work independently and within a team.
- Familiarity with cloud platforms and distributed computing concepts is a plus.

Qualification: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: 7 to 10 years
Job Reference Number: 13131
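A compact sketch of the Spark SQL processing named in the skills list above, shown in PySpark rather than Java for brevity; the table and columns are fabricated.

```python
# Spark SQL aggregation sketch (illustrative; data and names are fabricated;
# assumes PySpark is installed locally).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

df = spark.createDataFrame(
    [("orders", 120), ("returns", 8), ("orders", 95)],
    ["event_type", "amount"],
)
df.createOrReplaceTempView("events")  # expose the DataFrame to SQL

summary = spark.sql(
    "SELECT event_type, SUM(amount) AS total FROM events GROUP BY event_type"
)
summary.show()
spark.stop()
```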

Posted 2 weeks ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, and ensure they meet 100% quality assurance parameters.

Do:

1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas throughout the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and impact availability
- Analyze client requirements and convert requirements to feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications of existing systems
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all the codes are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback on a regular basis to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document all necessary details and reports formally for proper understanding of the software, from client proposal to implementation
- Ensure good-quality interactions with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally

Mandatory Skills: Scala programming. Experience: 5-8 Years.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Long Description:
- Experience and expertise in at least one of the following languages: Java, Scala, Python
- Experience and expertise in Spark architecture
- Experience in the range of 6-10+ years
- Good problem-solving and analytical skills
- Ability to comprehend business requirements and translate them into technical requirements
- Good communication and collaboration skills with the team and across vendors
- Familiarity with the development life cycle, including CI/CD pipelines
- Proven experience in, and interest in, supporting existing strategic applications
- Familiarity working with agile methodology

Mandatory Skills: Scala programming. Experience: 5-8 Years.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

As a Senior Big Data Platform Engineer at Incedo, you will be responsible for designing and implementing big data platforms to support large-scale data integration projects. You will work with data architects and data engineers to define the platform architecture and build the necessary infrastructure. You will be skilled in big data technologies such as Hadoop, Spark, and Kafka and have experience with cloud computing platforms such as AWS or Azure. You will be responsible for ensuring the performance, scalability, and security of the big data platform and for troubleshooting any issues that arise.

Roles & Responsibilities:
- Designing, developing, and maintaining large-scale big data platforms using technologies like Hadoop, Spark, and Kafka
- Creating and managing data warehouses, data lakes, and data marts
- Implementing and optimizing ETL processes and data pipelines (see the ETL sketch below)
- Developing and maintaining security and access controls
- Troubleshooting and resolving big data platform issues
- Collaborating with other teams to ensure the consistency and integrity of data

Technical Skills Requirements:
- Experience with big data processing technologies such as Apache Hadoop, Apache Spark, or Apache Kafka.
- Understanding of distributed computing concepts such as MapReduce, Spark RDDs, or Apache Flink data streams.
- Familiarity with big data storage solutions such as HDFS, Amazon S3, or Azure Data Lake Storage.
- Knowledge of big data processing frameworks such as Apache Hive, Apache Pig, or Apache Impala.
- Excellent communication skills, with the ability to convey complex technical information to non-technical stakeholders clearly and concisely.
- An understanding of, and alignment with, the company's long-term vision.
- Leadership, guidance, and support for team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team.

Qualifications:
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university. A computer science background is preferred.
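To make the ETL-pipeline responsibility above concrete, here is a minimal PySpark sketch: take raw rows, clean and type them, and write a partitioned Parquet table to a data-lake path. Column names and the output path are hypothetical.

```python
# Minimal batch ETL sketch in PySpark (illustrative; column names and the
# output path are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Stand-in for reading a raw landing-zone file.
raw = spark.createDataFrame(
    [("t1", "2024-01-01", "49.9"), ("t2", "2024-01-01", None)],
    ["txn_id", "txn_date", "amount"],
)

cleaned = (
    raw.dropna(subset=["amount"])                          # drop incomplete rows
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Partitioned columnar output, typical for a data lake.
cleaned.write.mode("overwrite").partitionBy("txn_date").parquet("transactions_parquet")
spark.stop()
```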

Posted 2 weeks ago

Apply

6.0 - 11.0 years

11 - 16 Lacs

Noida

Work from Office

Data Engineering - Technical Lead

Paytm is India's leading digital payments and financial services company, focused on driving consumers and merchants to its platform by offering them a variety of payment use cases. Paytm provides consumers with services like utility payments and money transfers, while empowering them to pay via Paytm Payment Instruments (PPI) like Paytm Wallet, Paytm UPI, Paytm Payments Bank Netbanking, Paytm FASTag and Paytm Postpaid - Buy Now, Pay Later. To merchants, Paytm offers acquiring devices like Soundbox, EDC, QR and Payment Gateway, where payment aggregation is done through PPI and other banks' financial instruments. To further enhance merchants' business, Paytm offers merchants commerce services through advertising and the Paytm Mini app store. Operating on this platform leverage, the company then offers credit services such as merchant loans, personal loans and BNPL, sourced by its financial partners.

About the Role: This position requires someone to work on complex technical projects and work closely with peers in an innovative and fast-paced environment. For this role, we require someone with a strong product design sense who is specialized in Hadoop and Spark technologies.

Requirements: Minimum 6+ years of experience in Big Data technologies.

The position:
- Grow our analytics capabilities with faster, more reliable tools, handling petabytes of data every day.
- Brainstorm and create new platforms that can help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
- Make changes to our systems, diagnosing any problems across the entire technical stack.
- Design and develop a real-time events pipeline for data ingestion for real-time dashboarding (see the streaming sketch below).
- Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
- Design and implement new components and various emerging technologies in the Hadoop ecosystem, and successfully execute various projects.
- Be a brand ambassador for Paytm - Stay Hungry, Stay Humble, Stay Relevant!

Skills that will help you succeed in this role:
- Strong hands-on experience with Hadoop, MapReduce, Hive, Spark, PySpark, etc.
- Excellent programming/debugging skills in Python/Java/Scala.
- Experience with a scripting language such as Python, Bash, etc.
- Good to have: experience working with NoSQL databases like HBase, Cassandra.
- Hands-on programming experience with multithreaded applications.
- Good to have: experience with databases, SQL, and messaging queues like Kafka.
- Good to have: experience developing streaming applications, e.g. Spark Streaming, Flink, Storm, etc.
- Good to have: experience with AWS and cloud technologies such as S3.
- Experience with caching architectures like Redis, etc.

Why join us: Because you get an opportunity to make a difference and have a great time doing it. You are challenged and encouraged here to do work that is meaningful for you and for those we serve. You should work with us if you think seriously about what technology can do for people. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be.

Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here.
It's your opportunity to be a part of the story!
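A skeletal sketch of the real-time events pipeline described above, using Spark Structured Streaming over a Kafka topic; the broker, topic, and sink paths are hypothetical, and the spark-sql-kafka connector package must be on the classpath.

```python
# Structured Streaming ingestion sketch: Kafka topic -> Parquet sink
# (illustrative; broker, topic, and paths are hypothetical; requires the
# spark-sql-kafka connector package).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "payment-events")
         .load()
         .select(F.col("value").cast("string").alias("event"))  # raw payload
)

query = (
    events.writeStream.format("parquet")
          .option("path", "/datalake/events")
          .option("checkpointLocation", "/datalake/checkpoints/events")
          .start()
)
query.awaitTermination()
```

The checkpoint location is what gives the stream exactly-once recovery after a restart, which matters for dashboards fed from the sink.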

Posted 2 weeks ago

Apply

5.0 - 8.0 years

4 - 7 Lacs

Mumbai

Work from Office

Excellent knowledge of Spark: the professional must have a thorough understanding of the Spark framework, performance tuning, etc. (see the tuning sketch below). Excellent knowledge and at least 4+ years of hands-on experience in Scala and PySpark. Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory. Strong Unix and shell scripting skills. Excellent interpersonal skills and, for experienced candidates, excellent leadership skills. Good knowledge of any of the CSPs like Azure, AWS or GCP is mandatory; certifications on Azure will be an additional plus. Mandatory Skills: PySpark. Experience: 5-8 Years.
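One concrete performance-tuning technique implied by the requirements above is the broadcast join: shipping a small dimension table to every executor to avoid a shuffle. A minimal PySpark sketch follows, with fabricated data.

```python
# Broadcast-join tuning sketch (illustrative; data is fabricated; assumes
# PySpark is installed locally).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

facts = spark.createDataFrame([(1, 100.0), (2, 50.0)], ["country_id", "sales"])
dims = spark.createDataFrame([(1, "IN"), (2, "US")], ["country_id", "country"])

# broadcast() hints Spark to replicate the small table to every executor,
# replacing a shuffle join with a map-side join.
joined = facts.join(F.broadcast(dims), "country_id")
joined.explain()  # the plan should show a BroadcastHashJoin
spark.stop()
```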

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Company Description: WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description

Minimum Experience: 8+ years
Domain: Pharmaceutical
Location: Pan India

Overview: We are looking for a Deputy Manager/Group Manager in Advanced Analytics for the Lifesciences/Pharma domain. The person will lead a dynamic team focused on assisting clients in Marketing, Sales, and Operations through advanced data analytics. Proficiency in ML & DL algorithms, NLP, Generative AI, Omni-Channel Analytics, and Python/R/SAS is essential.

Roles and Responsibilities:
- Partner with the clients' Advanced Analytics team to identify, scope, and execute advanced analytics efforts that answer business questions, solve business needs, and add business value. Examples include estimating marketing channel effectiveness or sales force sizing.
- Maintain a broad understanding of pharmaceutical sales, marketing and operations, and develop analytical solutions in these areas.
- Stay current with statistical/mathematical/informatics modeling methodology, maintain proficiency in applying new and varied methods, and be competent in justifying the methods selected.
- Develop POCs for building internal capabilities and standardizing common modeling processes.
- Lead and guide the team, independently or with little support, to implement and deliver complex project assignments.
- Provide strategic leadership to the team by building new capabilities within the group and identifying business opportunities.
- Provide thought leadership by contributing to whitepapers and articles at the BU and organization level.
- Develop and deliver formal presentations to senior clients in both delivery and sales situations.

Additional Information:
- Interpersonal communication skills for effective customer consultation
- Teamwork and leadership skills
- Self-management skills with a focus on results, for timely and accurate completion of competing deliverables
- Make the impossible possible in the quest to make life better
- Bring analytics to life by giving it zeal and making it applicable to business
- Know, learn, and keep up to date on the statistical and scientific advances to maximize your impact
- Bring an insatiable desire to learn, to innovate, and to challenge yourself for the benefit of patients

Technical Skills:
- Proficient in Python or R for statistical and machine learning applications
- Expertise in a wide range of techniques, including regression, classification, decision trees, text mining, Natural Language Processing, Bayesian models, and more (see the text-mining sketch below)
- Build and train neural network architectures such as CNNs, RNNs, LSTMs, and Transformers
- Experience in Omni-Channel Analytics for predicting the Next Best Action using advanced ML/DL/RL algorithms and pharma CRM data
- Hands-on experience in NLP & NLG, covering topic modeling, Q&A, chatbots, and document summarization
- Proficient in LLMs (e.g., GPT, LangChain, LlamaIndex) and open-source LLMs
- Cloud Platforms: hands-on experience in Azure, AWS, GCP, with application development skills in Python, Docker, and Git

Good-to-have Skills: Exposure to big data technologies such as Hadoop, Hive, MapReduce, etc.

Qualifications: B.Tech/Master's in a quantitative discipline (e.g. Applied Mathematics, Computer Science, Bioinformatics, Statistics, Ops Research, Econometrics)
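A tiny, self-contained sketch of the text-mining skill referenced above: TF-IDF features feeding a Naive Bayes classifier via scikit-learn. The corpus, labels, and channel taxonomy are fabricated for illustration.

```python
# Text classification sketch: TF-IDF + Naive Bayes (illustrative; the tiny
# corpus and "channel" labels are fabricated).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["rep visited physician", "email campaign opened",
         "sample request received", "webinar attended"]
labels = ["field", "digital", "field", "digital"]  # hypothetical channels

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["physician requested samples"]))  # e.g. ['field']
```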

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Tasks / Experience:
- Experience in building and managing data pipelines, including development and operations of data pipelines in the cloud (preferably Azure)
- Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark
- Deep expertise in architecting data pipelines in the cloud using cloud-native technologies
- Good experience in both ETL and ELT ingestion patterns
- Hands-on experience working on large volumes of data (petabyte scale) with distributed compute frameworks
- Good understanding of container platforms: Kubernetes and Docker
- Excellent knowledge of and experience with object-oriented programming
- Familiarity developing with RESTful API interfaces
- Experience with markup languages such as JSON and YAML
- Proficient in relational database design and development
- Good knowledge of data warehousing concepts
- Working experience with the agile Scrum methodology

Technical Skills:
- Strong skills in distributed cloud data analytics platforms like Databricks, HDInsight, EMR clusters, etc.
- Strong programming skills in Python/Java/R/Scala, etc.
- Experience with stream-processing systems: Kafka, Apache Storm, Spark Streaming, Apache Flink, etc.
- Hands-on working knowledge of cloud data lake stores like Azure Data Lake Storage
- Data pipeline orchestration with Azure Data Factory, Amazon Data Pipeline
- Good knowledge of file formats like ORC, Parquet, Delta, Avro, etc. (see the Parquet sketch below)
- Good experience using SQL and NoSQL databases like MySQL, Elasticsearch, MongoDB, PostgreSQL and Cassandra running huge volumes of data
- Strong experience in networking and security measures
- Proficiency with CI/CD automation, specifically with DevOps build and release pipelines
- Proficiency with Git, including branching/merging strategies, pull requests, and basic command-line functions
- Good data modelling skills

Job Responsibilities:
- Cloud analytics, storage, security, resiliency and governance
- Build and maintain the data architecture for data engineering and data science projects
- Extract, transform and load data from source systems to a data lake or data warehouse, leveraging a combination of IaaS or SaaS components
- Perform compute on huge volumes of data using open-source projects like Databricks/Spark or Hadoop
- Define table schemas and quickly adapt the pipeline
- Work with high-volume unstructured and streaming datasets
- Manage NoSQL databases on cloud (AWS, Azure, etc.)
- Architect solutions to migrate projects from on-premises to cloud
- Research, investigate and implement newer technologies to continually evolve security capabilities
- Identify valuable data sources and automate collection processes
- Implement adequate networking and security measures for the data pipeline
- Implement a monitoring solution for the data pipeline
- Support the design and implementation of data engineering solutions
- Maintain excellent documentation for understanding and accessing data storage
- Work independently as well as in teams to deliver transformative solutions to clients
- Be proactive and constantly pay attention to the scalability, performance and availability of our systems
- Establish a privacy/security hierarchy and regulate access
- Collaborate with engineering and product development teams
- Bring a systematic problem-solving approach, strong communication skills, and a sense of ownership and drive

Qualifications: Bachelor's or Master's degree in Computer Science or relevant streams; any relevant cloud data engineering certification
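As a small illustration of the columnar file formats referenced above, here is a self-contained pyarrow sketch that writes and reads a Parquet file; the schema is hypothetical.

```python
# Write and read a Parquet file with pyarrow (illustrative; the schema is
# hypothetical).
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "device_id": ["d1", "d2", "d1"],
    "reading": [21.5, 19.8, 22.1],
})

pq.write_table(table, "readings.parquet")  # columnar, compressed on disk
print(pq.read_table("readings.parquet").to_pydict())
```

Columnar formats like Parquet and ORC let engines read only the columns a query touches, which is why they dominate data-lake storage over row-oriented CSV.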

Posted 2 weeks ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description: We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes.
- Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others (see the Lambda sketch below).
- Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
- Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
- Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
- Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
- Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
- Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications

Essential Skills:
- Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
- AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
- ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation.
- Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
- Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms.
- Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
- Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
- Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
- Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
- Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
- Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
- Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
- Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
- Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
- AWS Glue Catalog: 3 years (Required)
- Data Engineering: 6 years (Required)
- AWS CDK, CloudFormation, Lambda, Step Functions: 3 years (Required)
- AWS Elastic MapReduce (EMR): 3 years (Required)
Work Location: In person
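A bare-bones sketch of the S3-plus-Lambda pattern referenced above: a Lambda handler reacting to new objects landing in a bucket. The event shape follows the standard S3 notification format; the validation step is a hypothetical placeholder.

```python
# S3-triggered Lambda handler sketch (illustrative; the bucket and the
# validation step are hypothetical; boto3 is preinstalled in the Lambda
# runtime).
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:                     # one record per S3 event
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)   # basic existence/size check
        print(f"received s3://{bucket}/{key} ({head['ContentLength']} bytes)")
    return {"statusCode": 200, "body": json.dumps("ok")}
```

In practice the handler would enqueue downstream work (a Glue job, a Step Functions execution) rather than just logging.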

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description
Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformations, Teradata data warehouse, Hadoop, Analytics). Responsible for the architecture of small/mid-size projects.
Outcomes
Implement data extraction and transformation for a data warehouse (ETL, data extracts, data load logic, mappings, workflows, stored procedures), a data analysis solution, a data reporting solution, or cloud data tools on any one of the cloud providers (AWS/Azure/GCP).
Understand business workflows and related data flows.
Develop designs for data acquisition and data transformation or data modelling; apply business intelligence to data or design data fetching and dashboards.
Design information structure and work/dataflow navigation. Define backup, recovery and security specifications.
Enforce and maintain naming standards and a data dictionary for data models.
Provide or guide the team to produce estimates.
Help the team develop proofs of concept (POCs) and solutions relevant to customer problems; able to troubleshoot problems while developing POCs.
Architect/Big Data specialty certification (AWS/Azure/GCP/general, for example via Coursera or a similar learning platform, or any ML certification).
Measures Of Outcomes
Percentage of billable time spent in a year developing and implementing data transformation or data storage.
Number of best practices documented for any new tool or technology emerging in the market.
Number of associates trained on the data service practice.
Outputs Expected
Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators and scientific research communities, or attend conferences related to data in the cloud.
Operational Management: Help Architects establish governance, stewardship and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications and systems to support data technology goals. Collaborate with project managers and business teams on all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility and multi-platform integration.
Project Control And Review: Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics.
Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards and other knowledge articles for data management. Conduct and facilitate knowledge-sharing and learning sessions across the team. Gain industry-standard certifications in the technology or area of expertise. Support technical skill building (including hiring and training) for the team based on inputs from the project manager/RTEs. Mentor new team members in technical areas. Gain and cultivate domain expertise to provide the best and most optimized solution to the customer (delivery).
Requirement Gathering And Analysis: Work with customer business owners and other teams to collect, analyze and understand the requirements, including NFRs/defining NFRs. Analyze gaps/trade-offs based on the current system context and industry practices; clarify the requirements by working with the customer. Define the systems and sub-systems that make up the program.
People Management: Set goals and manage the performance of team engineers. Provide career guidance to technical specialists and mentor them.
Alliance Management: Identify alliance partners based on an understanding of service offerings and client requirements. In collaboration with the Architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and their relevance to the program.
Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscape, processes and tools to arrive at the architecture options that best fit the client program. Analyze cost vs. benefits of solution options. Support Architects II and III in creating a technology/architecture roadmap for the client. Define the architecture strategy for the program.
Innovation And Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices.
Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies.
Stakeholder Management: Monitor the concerns of internal stakeholders like Product Managers and RTEs and external stakeholders like client architects on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand the professional network in the client organization at team and program levels.
New Service Design: Identify potential opportunities for new service offerings based on customer voice/partner inputs. Conduct beta testing/POCs as applicable. Develop collateral and guides for GTM.
Skill Examples
Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of Architects.
Use technology knowledge to create proofs of concept (POCs) and (reusable) assets under the guidance of the specialist. Apply best practices in own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide and defend the technology choices made; review solutions under guidance.
Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST.
Use independent knowledge of design patterns, tools and principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate option for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by Specialists for efficiency (consumption of hardware and memory, memory leaks, etc.).
Use knowledge of software development processes, tools and techniques to identify and assess incremental improvements to the software development process, methodology and tools. Take technical responsibility for all stages in the software development process. Write optimal code with a clear understanding of memory leakage and its related impact. Implement global standards and guidelines relevant to programming and development; come up with points of view and new technological ideas.
Use knowledge of project management and Agile tools and techniques to support, plan and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies.
Use knowledge of project metrics to understand their relevance to the project. Collect and collate project metrics and share them with the relevant stakeholders.
Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place.
Strong proficiency in understanding data workflows and dataflows. Attention to detail. High analytical capability.
Knowledge Examples
Data visualization. Data migration. RDBMSs (relational database management systems) and SQL. Hadoop technologies like MapReduce, Hive and Pig. Programming languages, especially Python and Java. Operating systems like UNIX and MS Windows. Backup/archival software.
Additional Comments
AI Architect Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities.
Key Responsibilities: Design and architect AI-based solutions, including multi-agent GenAI systems using LLMs and RAG pipelines. Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants. Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics. Leverage GenAI (LLMs) and time series models to drive intelligent observability and performance management. Work closely with product, engineering, and operations teams to align solutions with domain and customer needs. Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices. Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments.
Key Skills & Technology Areas:
AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI.
LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps.
Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis.
Modeling & Analytics: Time Series Forecasting, Predictive Modeling, Synthetic Data Generation.
Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory.
Cloud & Tools: AWS (Bedrock, SageMaker, Lambda); Azure (Azure ML, Azure Databricks, Synapse); GCP (Vertex AI, optional).
Observability Integration: Splunk, ELK Stack, Prometheus.
DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring & versioning.
Architectural Patterns: Microservices, Event-Driven Architecture, Multi-Agent Systems, API-first Design.
Other Requirements: Proven ability to work independently and collaboratively in agile, innovation-driven teams. Strong problem-solving mindset and product-oriented thinking. Excellent communication and technical storytelling skills. Flexibility to work across multiple opportunities based on business priorities. Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.
Skills python,pandas,AIML,GENAI
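For context on the RAG-pipeline and FAISS skills this role lists, here is a minimal sketch of the retrieval step of a RAG pipeline, assuming a FAISS index over document embeddings. The embed() function is a hypothetical placeholder for a real embedding model, and the final LLM call (e.g., via Bedrock or Azure OpenAI) is left as a comment:

```python
# Minimal sketch of the retrieval step in a RAG pipeline, assuming a
# FAISS index over document embeddings. embed() stands in for whatever
# embedding model the real system uses.
import faiss
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Hypothetical embedding call; replace with a real model/API."""
    rng = np.random.default_rng(0)  # placeholder vectors for the sketch
    return rng.random((len(texts), 384), dtype=np.float32)

docs = ["Restart the ingest service after a config change.",
        "RCA template: symptom, scope, root cause, fix, prevention."]

index = faiss.IndexFlatL2(384)        # exact L2 search over 384-dim vectors
index.add(embed(docs))                # index the corpus once

query = "How do I write an RCA?"
_, ids = index.search(embed([query]), 1)   # retrieve the closest document
context = docs[ids[0][0]]

prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
# The chosen LLM (e.g., Bedrock or Azure OpenAI) would be called on `prompt` here.
print(prompt)
```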

Posted 2 weeks ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives.
Responsibilities
Develop and maintain data pipelines implementing ETL processes. Take responsibility for Hadoop development and implementation. Work closely with a data science team implementing data analytic pipelines. Help define data governance policies and support data versioning processes. Maintain security and data privacy, working closely with the Data Protection Officer internally. Analyse a vast number of data stores and uncover insights.
Skillset Required
Ability to design, build and unit test applications in PySpark. Experience with Python development and Python data transformations. Experience with SQL scripting on one or more platforms: Hive, Oracle, PostgreSQL, MySQL, etc. In-depth knowledge of Hadoop, Spark, and similar frameworks. Strong knowledge of Data Management principles. Experience with normalizing/de-normalizing data structures, and developing tabular, dimensional and other data models. Knowledge of YARN, clusters, executors and cluster configuration. Hands-on experience with different file formats like JSON, Parquet, CSV, etc. Experience with the CLI on Linux-based platforms. Experience analysing current ETL/ELT processes and defining and designing new processes. Experience analysing business requirements in a BI/Analytics context and designing data models to transform raw data into meaningful insights. Good to have knowledge of Data Visualization. Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources.
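For illustration, a minimal PySpark ETL sketch in the spirit of this role — extract raw JSON, apply simple transformations, and load partitioned Parquet. The paths and column names are hypothetical:

```python
# A minimal PySpark ETL sketch: JSON in, cleaned and partitioned Parquet out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("/data/raw/orders/")           # extract

clean = (raw
         .filter(F.col("order_id").isNotNull())      # transform: drop bad rows
         .withColumn("order_date", F.to_date("order_ts"))
         .dropDuplicates(["order_id"]))

(clean.write                                         # load, partitioned for pruning
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("/data/curated/orders/"))
```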

Posted 2 weeks ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Gurugram

Work from Office

We're looking for a Big Data Engineer who can find creative solutions to tough problems. As a Big Data Engineer, you'll create and manage our data infrastructure and tools, including collecting, storing, processing and analyzing our data and data systems. You know how to work quickly and accurately, using the best solutions to analyze massive data sets, and you know how to get results. You'll also make this data easily accessible across the company and usable in multiple departments.
Skillset Required
Bachelor's Degree or more in Computer Science or a related field. A solid track record of data management showing your flawless execution and attention to detail. Strong knowledge of and experience with statistics. Programming experience, ideally in Python, Spark, Kafka or Java, and a willingness to learn new programming languages to meet goals and objectives. Experience in C, Perl, Javascript or other programming languages is a plus. Knowledge of data cleaning, wrangling, visualization and reporting, with an understanding of the best, most efficient use of associated tools and applications to complete these tasks. Experience in MapReduce is a plus. Deep knowledge of data mining, machine learning, natural language processing, or information retrieval. Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources. Experience with machine learning toolkits including H2O, SparkML or Mahout. A willingness to explore new alternatives or options to solve data mining issues, and to utilize a combination of industry best practices, data innovations and your experience to get the job done. Experience in production support and troubleshooting. You find satisfaction in a job well done and thrive on solving head-scratching problems.
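For reference on the MapReduce mention, a minimal sketch of the map/reduce pattern expressed with PySpark RDDs — the classic word count. The input path is hypothetical:

```python
# MapReduce pattern with PySpark RDDs: map each word to (word, 1),
# then reduce by key to sum the counts.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
sc = spark.sparkContext

counts = (sc.textFile("/data/logs/*.txt")
            .flatMap(lambda line: line.split())       # map: emit words
            .map(lambda word: (word, 1))              # map: key-value pairs
            .reduceByKey(lambda a, b: a + b))         # reduce: sum per key

print(counts.takeOrdered(10, key=lambda kv: -kv[1]))  # top 10 words
```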

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Valorem Reply is looking for a Data Engineer with experience building and contributing to the design of database systems, both normalized transactional systems and dimensional reporting systems. Strong experience with SQL Server as a database engine, as well as Microsoft and Databricks technology experience implementing Big Data with Advanced Analytics solutions. Successful candidates will have experience and skill in providing solutions for storing, retrieving, transforming and aggregating data to support line-of-business applications as well as reporting systems. The Data Engineer will work with customers to deliver solutions utilizing strong business, technical and data modelling skills. This position will represent Valorem's approach to data visualization and information delivery solutions and as such must demonstrate proficiency with Power BI and Azure Data Services.
Key Responsibilities
Design and implement general architecture for complex data systems. Translate business requirements into functional and technical specifications. Design and implement lakehouse architecture. Develop and manage cloud-based data architecture and reporting solutions. Apply data modelling principles for relational and dimensional data structures. Design Data Warehouses following established principles (e.g., Kimball, Inmon). Create and manage source-to-target mappings for ETL/ELT processes. Mentor junior engineers and contribute to architectural decisions and code reviews.
Minimum Qualifications
Bachelor's degree in Computer Science, Computer Engineering, MIS, or a related field. 5+ years of experience with Microsoft SQL Server and strong proficiency in T-SQL and SQL performance tuning (indexing, structure, query optimization). 5+ years of experience in Microsoft data platform development and implementation. 5+ years of experience with Power BI or other competitive technologies. 3+ years of experience in consulting, with a focus on analytics and data solutions. 2+ years of experience with Databricks, including Unity Catalog, Databricks SQL, Workflows, and Delta Sharing. Proficiency in Python and Apache Spark. Ability to develop and manage Databricks notebooks for data transformation, exploration, and model deployment. Expertise in Microsoft Azure services, including Azure SQL, Azure Data Factory (ADF), Azure Data Warehouse (Synapse Analytics), Azure Data Lake, and Stream Analytics. Experience with Microsoft Fabric. Familiarity with CI/CD pipelines and infrastructure-as-code tools like Terraform or Azure Resource Manager (ARM). Knowledge of taxonomies, metadata management, and master data management. Familiarity with data stewardship, ownership, and data quality management. Expertise in Big Data technologies and tools: Big Data platforms such as HDFS, MapReduce, Pig, Hive; general DBMS experience with Oracle, DB2, MySQL, etc.; NoSQL databases such as HBase, Cassandra, DataStax, MongoDB, CouchDB, etc. Experience with non-Microsoft reporting and BI tools, such as Qlik, Cognos, MicroStrategy, Tableau, etc.
About Reply
Reply specializes in the design and implementation of solutions based on new communication channels and digital media. Reply is a network of highly specialized companies supporting global industrial groups operating in the telecom and media, industry and services, banking, insurance and public administration sectors in the definition and development of business models enabled for the new paradigms of AI, cloud computing, digital media and the Internet of Things. Reply services include Consulting, System Integration and Digital Services. www.reply.com
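As a small illustration of the Databricks lakehouse work this role describes, a minimal bronze-to-silver transform sketch using Delta tables; the catalog, table, and column names are hypothetical:

```python
# A minimal bronze-to-silver sketch for a Databricks lakehouse, assuming
# Unity Catalog three-part table names (catalog.schema.table) -- these
# names and columns are hypothetical. On Databricks, getOrCreate()
# returns the notebook's existing Spark session.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("main.bronze.raw_sales")

silver = (bronze
          .filter(F.col("amount") > 0)                # basic data-quality rule
          .withColumn("sale_date", F.to_date("sale_ts"))
          .dropDuplicates(["sale_id"]))               # idempotent re-runs

(silver.write
       .format("delta")                               # Delta is the lakehouse table format
       .mode("overwrite")
       .saveAsTable("main.silver.sales"))
```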

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.
Description
The Manager, Analytics, will be responsible for supporting the Director, Data, Performance and R&D Strategy lead with data and performance efforts for the entire Global Procurement organization. This role supports the documentation of scoring performance against Global Procurement's priorities and objectives. This includes reporting on data gathering requirements, goals, priorities, and documenting key performance indicators (KPIs) for the procurement portfolio. This role will support Global Procurement by providing appropriate data to generate value opportunities and ensure realization. This role will support and enable the development and implementation of all initiatives within the Procurement multi-year functional strategic roadmap that will focus on analytics capabilities. This role plays a part in managing procurement activities strategically and efficiently and identifying areas of continuous improvement/efficiencies where applicable.
Major Responsibilities And Accountabilities
Data and Analytics: Delivers analytics metrics and dashboards including, but not limited to, sourcing events, supplier, contracts, spend, savings, market intelligence and cost intelligence to successfully achieve business objectives. Partners with BIA, Procurement and IT teams to deliver necessary data management tools and system solutions; identifies business challenges; uses fact-based solutions and data analysis to help influence changes to operations, processes or programs; and champions movement to an organizational 'Lead with Data' mindset. Collaborates effectively across a matrix environment and builds strong partnerships; good interpersonal, presentation, communication and negotiation skills. Ability to manage multiple projects and priorities effectively. Very well versed in business, data and technical language to connect processes, tools and data. Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner. Maintains and ensures quality assurance of key data sets, reports and metrics that are relevant and insightful and highlight key trends in human capital dynamics. Good communication and presentation skills.
Performance Scorecard & Maintenance: Support the end-to-end performance reporting of the functional strategic roadmap via development of the Global Procurement and functional team scorecards, including development of metrics aligned to the functional vision and strategic roadmap. Manage ongoing reporting and monitoring of key metrics, including liaising with key stakeholders across all of Global Procurement for progress updates, etc.
Analyze performance trends, proactively identify potential shortfalls and risks, and make fact-based recommendations to close gaps against targets. Report status to leadership and functional area teams as appropriate.
Internal/External Stakeholders: Other functional strategy leads; management in BMS's Global Procurement organization; Global Procurement Category Managers, Sourcing Managers and Business Partners.
Minimum Requirements: BA/BS in a quantitative major or concentration required. 5+ years of experience developing and using advanced analytics and reporting techniques. 3+ years of experience in performing procurement analytics or relevant experience. Advanced experience in Tableau and Power BI. Ability to work in a fast-paced global environment with multiple competing priorities. Experience in supporting new capability development, pilots, and integration. Experience in leveraging methods such as Design Thinking and Human-Centered Design to generate high-value questions. Analytical mindset, intellectual curiosity, creativity, strong attention to detail and execution skills. Experience working with tools across the analytics stack, including data management and analysis tools such as MapReduce/Hadoop, SPSS/R, SAS, and Workday for data management, advanced analysis, and insights, along with BI tools (Tableau, Power BI) for data integration and reporting. Leverage procurement systems such as SAP Ariba, Oracle Procurement Cloud, etc., for process management, spend analysis, and decision support as needed. Proficiency in English.
Preferred Qualifications: M.S./M.B.A. Professional certifications (e.g. CPM, CPIM). Membership in professional associations, e.g. ISM.
If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.
Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.
On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles.
Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Palantir Foundry
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Additionally, you will monitor project progress, address any challenges that arise, and facilitate communication among team members to foster a productive work environment.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior professionals to support their growth and development.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Palantir Foundry.
- Strong understanding of application design and development principles.
- Experience with data integration and management within Palantir Foundry.
- Ability to troubleshoot and resolve application-related issues effectively.
- Familiarity with agile methodologies and project management practices.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Palantir Foundry.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
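For orientation, a minimal sketch of a Palantir Foundry transform written with the transforms API used in Foundry Code Repositories; the dataset paths and column names are hypothetical:

```python
# A minimal Foundry transform sketch: read one dataset, filter and
# de-duplicate it, and write the result to another dataset. The
# /Acme/... paths and columns are hypothetical placeholders.
from transforms.api import transform_df, Input, Output


@transform_df(
    Output("/Acme/datasets/clean_orders"),      # hypothetical output dataset
    source=Input("/Acme/datasets/raw_orders"),  # hypothetical input dataset
)
def clean_orders(source):
    # `source` arrives as a PySpark DataFrame; the return value is
    # written back to the declared Output dataset.
    return (source
            .filter(source["status"] == "COMPLETE")
            .dropDuplicates(["order_id"]))
```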

Posted 2 weeks ago

Apply

3.0 years

2 - 7 Lacs

Hyderābād

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India.
Minimum qualifications: Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience. 3 years of experience in building data and Artificial Intelligence (AI) solutions and working with technical customers. Experience in designing cloud enterprise solutions and supporting customer projects to completion.
Preferred qualifications: Experience in working with Large Language Models, data pipelines, and with data analytics and data visualization techniques. Experience with Data Extract, Transform, and Load (ETL) techniques. Experience in using Large Language Models (LLMs) to deploy multimodal solutions involving Text, Image, Video and Voice. Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, Extract, Transform, and Load/Extract, Load and Transform, and investigative tools and environments (e.g., Apache Beam, Hadoop, Spark, Pig, Hive, MapReduce, Flume). Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks. Excellent communication skills.
About the job
The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google's global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. In this role, you will play a role in ensuring that customers have a quality experience moving to the Google Cloud Generative AI (GenAI) and Agentic AI suite of products. You will design and implement solutions for customer use cases, leveraging core Google products. You will work with customers to identify opportunities to transform their business with Generative AI (GenAI), and deliver workshops designed to educate and empower customers to realize the potential of Google Cloud. You will have access to Google's technology to monitor application performance, debug and troubleshoot product issues, and address customer and partner needs. You will lead the execution of adopting Google Cloud Platform solutions for the customer. Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities
Deliver big data and GenAI solutions and solve technical customer challenges. Act as a trusted technical advisor to Google's customers. Identify new product features and feature gaps, provide guidance on existing product challenges, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform. Deliver best practices recommendations, tutorials, blog articles, and technical presentations, adapting to different levels of business and technical stakeholders.
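As a small illustration of the ETL tooling named above (e.g., Apache Beam), a minimal Beam pipeline sketch in Python; the bucket paths and filter rule are hypothetical, and the pipeline runs locally on the DirectRunner or can target Dataflow on Google Cloud:

```python
# A minimal Apache Beam pipeline sketch (Python SDK): read text lines,
# parse them, filter, and write the results back out.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/orders.csv")
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "KeepIN" >> beam.Filter(lambda row: row[2] == "IN")  # hypothetical country column
        | "Format" >> beam.Map(lambda row: ",".join(row))
        | "Write" >> beam.io.WriteToText("gs://example-bucket/filtered")
    )
```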
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 2 weeks ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent
Job Description: We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.
Key Responsibilities:
Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or data lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.
Qualifications:
Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.
Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).
Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience: AWS Glue Catalog: 5 years (Required); Data Engineering: 6 years (Required); AWS CDK, CloudFormation, Lambda, Step Functions: 3 years (Required); AWS Elastic MapReduce (EMR): 3 years (Required)
Work Location: In person
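To illustrate the Glue-centric stack this role lists, a minimal AWS Glue PySpark job skeleton — read from the Glue Data Catalog, transform, and write Parquet to S3; the database, table, and bucket names are hypothetical:

```python
# A minimal AWS Glue PySpark job sketch using the standard Glue
# boilerplate; sales_db/raw_orders and the S3 bucket are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table as a DynamicFrame, then work with it as a DataFrame
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)
df = dyf.toDF().filter(F.col("amount") > 0)   # simple quality filter

df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
job.commit()
```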

Posted 2 weeks ago

Apply

3.0 years

3 - 8 Lacs

Gurgaon

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Pune, Maharashtra, India; Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India.
Minimum qualifications: Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience. 3 years of experience in building data and Artificial Intelligence (AI) solutions and working with technical customers. Experience in designing cloud enterprise solutions and supporting customer projects to completion.
Preferred qualifications: Experience in working with Large Language Models, data pipelines, and with data analytics and data visualization techniques. Experience with Data Extract, Transform, and Load (ETL) techniques. Experience in using Large Language Models (LLMs) to deploy multimodal solutions involving Text, Image, Video and Voice. Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, Extract, Transform, and Load/Extract, Load and Transform, and investigative tools and environments (e.g., Apache Beam, Hadoop, Spark, Pig, Hive, MapReduce, Flume). Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks. Excellent communication skills.
About the job
The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google's global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. In this role, you will play a role in ensuring that customers have a quality experience moving to the Google Cloud Generative AI (GenAI) and Agentic AI suite of products. You will design and implement solutions for customer use cases, leveraging core Google products. You will work with customers to identify opportunities to transform their business with Generative AI (GenAI), and deliver workshops designed to educate and empower customers to realize the potential of Google Cloud. You will have access to Google's technology to monitor application performance, debug and troubleshoot product issues, and address customer and partner needs. You will lead the execution of adopting Google Cloud Platform solutions for the customer. Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities
Deliver big data and GenAI solutions and solve technical customer challenges. Act as a trusted technical advisor to Google's customers. Identify new product features and feature gaps, provide guidance on existing product challenges, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform. Deliver best practices recommendations, tutorials, blog articles, and technical presentations, adapting to different levels of business and technical stakeholders.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Pune

Work from Office

The Developer leads cloud application development and deployment. The developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: Strong proficiency in Java, Spring Framework, Spring Boot and RESTful APIs; excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Primary Skills: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop Ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Strong knowledge of microservice logging, monitoring, debugging and testing; in-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes and messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development. Familiar with Ant, Maven or other build automation frameworks; good knowledge of basic UNIX commands. Experience in concurrent design and multi-threading.
Preferred technical and professional experience: Experience in concurrent design and multi-threading. Primary Skills: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop Ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python.
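For the Hadoop-ecosystem skills listed (Hive alongside Spark), a minimal sketch of querying a Hive table from PySpark with Hive support enabled; it assumes a configured Hive metastore, and the database, table, and column names are hypothetical:

```python
# A minimal sketch of reading a Hive table through Spark SQL.
# enableHiveSupport() makes Spark resolve tables via the Hive metastore,
# which is assumed to be configured for the cluster.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-read")
         .enableHiveSupport()
         .getOrCreate())

df = spark.sql(
    "SELECT customer_id, SUM(amount) AS total "
    "FROM sales.transactions GROUP BY customer_id"  # hypothetical db.table
)
df.show(5)
```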

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies