8.0 - 10.0 years
10 - 12 Lacs
Pune
Work from Office
Role Purpose
The purpose of the role is to create exceptional architectural solution designs, provide thought leadership, and enable delivery teams to deliver exceptional client engagement and satisfaction.

Do
1. Develop architectural solutions for new deals and major change requests in existing deals:
- Create an enterprise-wide architecture that ensures systems are scalable, reliable, and manageable.
- Provide solutioning for RFPs received from clients and ensure overall design assurance.
- Develop a direction to manage the portfolio of to-be solutions, including systems, shared infrastructure services, and applications, to better match business outcome objectives.
- Analyse the technology environment, enterprise specifics, and client requirements to set a collaboration solution design framework/architecture.
- Provide technical leadership for the design, development, and implementation of custom solutions through thoughtful use of modern technology.
- Define and understand current-state solutions, and identify improvements, options, and trade-offs to define target-state solutions.
- Clearly articulate, document, and sell architectural targets, recommendations, and reusable patterns, and propose investment roadmaps accordingly.
- Evaluate and recommend solutions that integrate with the overall technology ecosystem.
- Work closely with various IT groups to transition tasks, ensure performance, and manage issues through to resolution.
- Produce detailed documentation (application view, multiple sections and views) of the architectural design and solution, covering all artefacts in detail.
- Validate the solution/prototype from the standpoints of technology, cost structure, and customer differentiation.
- Identify problem areas, perform root cause analysis of architectural designs and solutions, and provide relevant fixes.
- Collaborate with sales, program/project, and consulting teams to reconcile solutions to the architecture.
- Track industry and application trends and relate them to planning current and future IT needs.
- Provide technical and strategic input during the project planning phase in the form of technical architectural designs and recommendations.
- Collaborate with all relevant parties to review the objectives and constraints of solutions and determine conformance with the Enterprise Architecture.
- Identify implementation risks and potential impacts.

2. Enable delivery teams by providing optimal delivery solutions/frameworks:
- Build and maintain relationships with executives, technical leaders, product owners, peer architects, and other stakeholders to become a trusted advisor.
- Develop and establish relevant technical, business process, and overall support metrics (KPIs/SLAs) to drive results.
- Manage multiple projects and accurately report the status of all major assignments while adhering to all project management standards.
- Identify technical, process, and structural risks, and prepare a risk mitigation plan for all projects.
- Ensure quality assurance of all architecture and design decisions, and provide technical mitigation support to the delivery teams.
- Recommend tools for reuse and automation to improve productivity and reduce cycle times.
- Lead the development and maintenance of the enterprise framework and related artefacts.
- Develop trust and build effective working relationships through respectful, collaborative engagement across individual product teams.
- Ensure architecture principles and standards are consistently applied to all projects.
- Ensure optimal client engagement: support the pre-sales team in presenting the entire solution design and its principles to the client; negotiate, manage, and coordinate with client teams to ensure all requirements are met and the proposed solution makes an impact; demonstrate thought leadership and strong technical capability in front of the client to win confidence and act as a trusted advisor.

3. Competency building and branding:
- Ensure completion of necessary trainings and certifications.
- Develop proofs of concept (POCs), case studies, demos, etc. for new growth areas based on market and customer research.
- Develop and present Wipro's point of view on solution design and architecture by writing white papers, blogs, etc.
- Attain market referenceability and recognition through the highest analyst rankings, client testimonials, and partner credits.
- Be the voice of Wipro's thought leadership by speaking in internal and external forums.
- Mentor developers, designers, and junior architects on the project for their further career development and enhancement.
- Contribute to the architecture practice, for example by conducting selection interviews.

4. Team management:
- Resourcing: anticipate new talent requirements per market/industry trends or client requirements; hire adequate and suitable resources for the team.
- Talent management: ensure adequate onboarding and training for team members to enhance capability and effectiveness; build an internal talent pool and ensure career progression within the organization; manage team attrition; drive diversity in leadership positions.
- Performance management: set goals for the team, conduct timely performance reviews, and provide constructive feedback to direct reports; ensure Performance Nxt is followed for the entire team.
- Employee satisfaction and engagement: lead and drive engagement initiatives for the team; track team satisfaction scores and identify initiatives to build engagement within the team.

Mandatory Skills: Databricks - Data Engineering.
Posted 2 weeks ago
5.0 - 8.0 years
7 - 12 Lacs
Pune
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, the problem-solving steps taken, and the total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product for team members to facilitate better client interaction and troubleshooting.
- Document and analyse call logs to spot the most frequent trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after the call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements.
- If unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates.
- Enrol in product-specific and other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, technical test performance

Mandatory Skills: Databricks - Data Engineering.
Posted 2 weeks ago
8.0 - 13.0 years
20 - 25 Lacs
Pune, Bangalore Rural, Gurugram
Work from Office
Desired Skills and Experience
- 9+ years of experience in software development with a focus on data projects using Python, PySpark, and associated frameworks.
- Proven experience as a Data Engineer, with experience in the Azure cloud.
- Experience implementing solutions using Azure cloud services: Azure Data Factory, Azure Data Lake Storage Gen2, Azure databases, Azure Data Fabric, API gateway management, and Azure Functions.
- Strong SQL skills with RDBMS or NoSQL databases.
- Experience developing APIs using FastAPI or similar frameworks in Python.
- Familiarity with the DevOps lifecycle (Git, Jenkins, etc.) and CI/CD processes.
- Good understanding of ETL/ELT processes.
- Experience in the financial services industry, financial instruments, asset classes, and market data is a plus.
- Assist stakeholders with data-related technical issues and support their data infrastructure needs.
- Develop and maintain documentation for data pipeline architecture, development processes, and data governance.
- In-depth knowledge of data warehousing concepts, architecture, and implementation.
- Extremely strong organizational and analytical skills with strong attention to detail.
- Strong track record of excellent results delivered to internal and external clients.
- Excellent problem-solving skills, with the ability to work independently or as part of a team.
- Strong communication and interpersonal skills, with the ability to effectively engage both technical and non-technical stakeholders.
- Able to work independently without close supervision, and collaboratively as part of cross-team efforts.

Preferred candidate profile
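The listing asks for a good understanding of ETL/ELT processes. As a hedged illustration (not part of the posting), the difference can be sketched with the Python standard library: ETL transforms data in application code before loading, while ELT loads raw data and transforms it with SQL inside the warehouse. The table and column names below are invented for the example, with SQLite standing in for a real warehouse.

```python
import sqlite3

rows = [("alice", "150"), ("bob", "75"), ("carol", "300")]

def etl(conn):
    # ETL: transform in application code, then load the cleaned result.
    conn.execute("CREATE TABLE orders_etl (name TEXT, amount REAL)")
    cleaned = [(n.title(), float(a)) for n, a in rows]  # transform first
    conn.executemany("INSERT INTO orders_etl VALUES (?, ?)", cleaned)

def elt(conn):
    # ELT: load raw strings as-is, then transform with SQL in the "warehouse".
    conn.execute("CREATE TABLE orders_raw (name TEXT, amount TEXT)")
    conn.executemany("INSERT INTO orders_raw VALUES (?, ?)", rows)
    conn.execute("""
        CREATE TABLE orders_elt AS
        SELECT upper(substr(name, 1, 1)) || substr(name, 2) AS name,
               CAST(amount AS REAL) AS amount
        FROM orders_raw
    """)

conn = sqlite3.connect(":memory:")
etl(conn)
elt(conn)
# Both routes should yield the same cleaned result.
assert conn.execute("SELECT sum(amount) FROM orders_etl").fetchone()[0] == 525.0
assert conn.execute("SELECT name FROM orders_elt ORDER BY name").fetchall() == \
       [("Alice",), ("Bob",), ("Carol",)]
```

In practice the choice hinges on where compute is cheapest: modern warehouses such as Snowflake or Databricks favour ELT because the transform runs where the data already lives.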
Posted 2 weeks ago
8.0 - 13.0 years
15 - 25 Lacs
Pune
Hybrid
About This Role
We are looking for a talented and experienced Data Engineer / Tech Lead with hands-on expertise in an ETL tool, full knowledge of CI/CD practices, experience technically leading a team of more than 5, client-facing experience, and the ability to create data engineering and data quality frameworks. As a tech lead you must build ETL jobs, data quality jobs, and Big Data jobs, perform performance optimization based on the requirements, create reusable assets, and be able to carry out production deployments; experience with DWH appliances (Snowflake / Redshift / Synapse) is preferred.

Responsibilities
- Work with a team of engineers in designing, developing, and maintaining scalable and efficient data solutions using any data integration tool (e.g. Talend or Informatica) and Big Data technologies.
- Design, develop, and maintain end-to-end data pipelines using an ETL data integration tool (e.g. Talend or Informatica) to ingest, process, and transform large volumes of data from heterogeneous sources.
- Design cloud pipelines using Azure Data Factory or AWS Glue/Lambda.
- Implement data integration end to end with ETL technologies.
- Implement database solutions for storing, processing, and querying large volumes of structured, semi-structured, and unstructured data.
- Migrate ETL jobs from older versions to newer versions.
- Write advanced SQL scripts at a medium-to-expert level.
- Work with the client's technical team and provide guidance during technical challenges.
- Integrate and optimize data flows between various databases, data warehouses, and Big Data platforms.
- Collaborate with cross-functional teams to gather data requirements and translate them into scalable and efficient data solutions.
- Optimize ETL and data-load performance, scalability, and cost-effectiveness through optimization techniques.
- Interact with the client daily, report technical progress, and respond to technical questions.
- Implement best practices for data integration.
- Implement complex ETL data pipelines or similar frameworks to process and analyze massive datasets.
- Ensure data quality, reliability, and security across all stages of the data pipeline.
- Troubleshoot and debug data-related issues in production systems and provide timely resolution.
- Stay current with emerging technologies and industry trends in data engineering and CI/CD, and incorporate them into our data architecture and processes.
- Optimize data processing workflows and infrastructure for performance, scalability, and cost-effectiveness.
- Provide technical guidance and foster a culture of continuous learning and improvement.
- Implement and automate CI/CD pipelines for data engineering workflows, including testing, deployment, and monitoring.
- Perform migration from lower environments to production deployment; test and validate.

Must-Have Skills
- Certified in an ETL tool, a database, and a cloud platform (Snowflake certification preferred).
- Implemented at least 3 end-to-end data engineering projects.
- Worked on performance optimization and tuning for data loads, data processes, and data transformations in Big Data.
- Flexible to write code in Java/Scala/Python etc. as required.
- Implemented CI/CD pipelines using tools like Jenkins, GitLab CI, or AWS CodePipeline.
- Technically managed a team of at least 5 members and guided them technically.
- Capable of technical ownership of data engineering delivery.
- Strong client-facing communication skills.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in software engineering or a related role, with a strong focus on an ETL tool, databases, and integration.
- Proficiency in ETL tools like Talend or Informatica for data integration, building and orchestrating data pipelines.
- Hands-on experience with relational databases such as MySQL, PostgreSQL, or Oracle, and NoSQL databases such as MongoDB, Cassandra, or Redis.
- Solid understanding of database design principles, data modeling, and SQL query optimization.
- Experience with data warehousing, data lake and Delta Lake concepts and technologies, data modeling, and relational databases.
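The role above asks for data quality frameworks and for ensuring data quality across all pipeline stages. As a minimal, hedged sketch of the idea (the rule names, thresholds, and sample rows are invented, not taken from the posting), a reusable check runner can apply named predicates to each record and report violation counts per rule:

```python
# Minimal reusable data-quality check runner: each rule is a (name, predicate)
# pair applied row by row; the report counts violations per rule.
def run_checks(rows, rules):
    report = {name: 0 for name, _ in rules}
    for row in rows:
        for name, ok in rules:
            if not ok(row):
                report[name] += 1
    return report

rules = [
    ("amount_not_null", lambda r: r.get("amount") is not None),
    ("amount_non_negative", lambda r: r.get("amount") is None or r["amount"] >= 0),
    ("currency_known", lambda r: r.get("currency") in {"INR", "USD", "EUR"}),
]

rows = [
    {"amount": 120.0, "currency": "INR"},
    {"amount": None, "currency": "USD"},
    {"amount": -5.0, "currency": "GBP"},
]

report = run_checks(rows, rules)
# → {'amount_not_null': 1, 'amount_non_negative': 1, 'currency_known': 1}
```

In a production pipeline the same rules would typically run inside the ETL tool or as a gating step before load, failing the job when a violation count crosses a threshold.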
Posted 2 weeks ago
7.0 - 12.0 years
35 - 40 Lacs
Pune
Work from Office
Greetings from Peoplefy Infosolutions!
We are hiring for one of our reputed MNC clients based in Pune. We are looking for candidates with 7+ years of experience in the skills below.
Primary skills:
- Understanding of AI/ML in data engineering
- Python data engineering
- Database: BigQuery or Snowflake
Interested candidates for the above position, kindly share your CV at chitralekha.so@peoplefy.com with the following details:
Experience:
CTC:
Expected CTC:
Notice Period:
Location:
Posted 2 weeks ago
5.0 - 8.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Role & Responsibilities
- 3+ years of experience in Spark, Databricks, Hadoop, and data and ML engineering.
- 3+ years of experience designing architectures using AWS cloud services and Databricks.
- Architect, design, and build a Big Data platform (data lake / data warehouse / lakehouse) using Databricks services, integrating with wider AWS cloud services.
- Knowledge and experience of infrastructure as code and CI/CD pipelines to build and deploy the data-platform tech stack and solution.
- Hands-on Spark experience supporting and developing data engineering (ETL/ELT) and machine learning (ML) solutions using Python, Spark, Scala, or R.
- Distributed-system fundamentals and optimising Spark distributed computing.
- Experience setting up batch and streaming data pipelines using Databricks DLT, jobs, and streams.
- Understand the concepts and principles of data modelling, databases, and tables; able to produce, maintain, and update relevant data models across multiple subject areas.
- Design, build, and test medium-to-complex or large-scale data pipelines (ETL/ELT) based on feeds from multiple systems, using a range of storage technologies and/or access methods; implement data quality validation and create repeatable and reusable pipelines.
- Experience designing metadata repositories, understanding a range of metadata tools and technologies to implement metadata repositories, and working with metadata.
- Understand the concepts of build automation; implement automation pipelines to build, test, and deploy changes to higher environments.
- Define and execute test cases and scripts; understand the role of testing and how it works.

Preferred candidate profile
- Big Data technologies: Databricks, Spark, Hadoop, EMR, or Hortonworks.
- Solid hands-on experience in programming: Python, Spark, SQL, Spark SQL, Spark Streaming, Hive, and Presto.
- Experience with Databricks components and APIs: notebooks, jobs, DLT, interactive and jobs clusters, SQL warehouse, policies, secrets, DBFS, Hive metastore, Glue metastore, Unity Catalog, and MLflow.
- Knowledge and experience of AWS Lambda, VPC, S3, EC2, API Gateway, IAM users/roles/policies, Cognito, Application Load Balancer, Glue, Redshift, Spectrum, Athena, and Kinesis.
- Experience using source-control tools such as Git, Bitbucket, or AWS CodeCommit, and automation tools such as Jenkins, AWS CodeBuild, and CodeDeploy.
- Hands-on experience with Terraform and the Databricks API to automate the infrastructure stack.
- Experience implementing CI/CD and MLOps pipelines using Git, GitHub Actions, or Jenkins.
- Experience delivering project artifacts such as design documents, test cases, traceability matrices, and low-level design documents.
- Build reference architectures, how-tos, and demo applications for customers.
- Ready to complete certifications.
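The role mentions implementing CI/CD pipelines for the data platform using GitHub Actions or Jenkins. A hedged sketch of what a minimal GitHub Actions workflow for a pipeline repository might look like (the file path, job name, and test command are assumptions for illustration, not taken from the posting):

```yaml
# .github/workflows/data-pipeline-ci.yml (illustrative)
name: data-pipeline-ci
on:
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Unit-test transformations
        run: pytest tests/
```

A deploy job gated on the main branch would typically follow the test job, pushing notebooks or wheel artifacts to the workspace via the Databricks CLI or Terraform.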
Posted 2 weeks ago
2.0 - 4.0 years
8 - 12 Lacs
Mumbai
Work from Office
The SAS-to-Databricks Migration Developer will be responsible for migrating existing SAS code, data processes, and workflows to the Databricks platform. This role requires expertise in both SAS and Databricks, with a focus on converting SAS logic into scalable PySpark and Python code. The developer will design, implement, and optimize data pipelines, ensuring seamless integration and functionality within the Databricks environment. Collaboration with various teams is essential to understand data requirements and deliver solutions that meet business needs.
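The role centres on converting SAS logic into PySpark and Python. One common safeguard in such migrations is a parity harness: replay the legacy rule on a sample and compare it with the migrated logic's output before cutover. The SAS snippet in the comment and every name below are invented for illustration, and plain Python stands in for the PySpark job:

```python
# Legacy SAS DATA step being migrated (illustrative):
#   data high_value;
#     set transactions;
#     if amount > 1000 then flag = "HIGH"; else flag = "LOW";
#   run;

def migrated_flagging(records):
    """Python re-implementation of the SAS rule above."""
    return [dict(r, flag="HIGH" if r["amount"] > 1000 else "LOW") for r in records]

sample = [{"id": 1, "amount": 1500.0}, {"id": 2, "amount": 200.0}]
out = migrated_flagging(sample)
# Parity check: the migrated output must match the legacy rule row for row.
assert [r["flag"] for r in out] == ["HIGH", "LOW"]
```

At scale the same comparison is usually done by exporting the SAS output to the lake and diffing it against the PySpark result with a join on the key columns.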
Posted 2 weeks ago
2.0 - 5.0 years
4 - 8 Lacs
New Delhi, Chennai, Bengaluru
Hybrid
Your day at NTT DATA
Senior GenAI Data Engineer

We are seeking an experienced Senior Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing
Key Responsibilities:
- Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment.
- Data Ingestion and Integration: develop data ingestion frameworks to collect data from various sources, transform it, and integrate it into a unified data platform for GenAI model training and deployment.
- GenAI Model Integration: collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
- Cloud Infrastructure Management: design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
- Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow.
- Performance Optimization: optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
- Data Security and Compliance: ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
- Client Collaboration: collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
- Innovation and R&D: stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
- Knowledge Sharing: share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
- Bachelor's degree in computer science, engineering, or related fields (Master's recommended).
- Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications.
- 5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or cloud-native platforms).
- Proficiency in programming languages like SQL, Python, and PySpark.
- Strong data architecture, data modeling, and data governance skills.
- Experience with Big Data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi).
- Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions).
- Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras).

Nice to have:
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Ability to integrate vector databases and implement similarity search techniques, with a focus on GraphRAG, is a plus.
- Familiarity with API gateway and service mesh architectures.
- Experience with low-latency/streaming, batch, and micro-batch processing.
- Familiarity with Linux-based operating systems and REST APIs.

Location: Delhi or Bangalore
Workplace type: Hybrid Working
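The requirements above mention vector databases (Pinecone, Weaviate, Faiss, Annoy) for similarity search over dense vectors. At its core this is nearest-neighbour search; a brute-force cosine-similarity version in plain Python (a sketch only: real systems use approximate indexes for scale, and the document ids and vectors here are invented) looks like:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query, store, k=2):
    """Return the k stored ids most similar to the query vector."""
    scored = sorted(store.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

store = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], store, k=2))  # ['doc_a', 'doc_b']
```

A vector database replaces the sorted scan with an approximate index (e.g., HNSW or IVF) so the search stays fast when the store holds millions of embeddings.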
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai, Hyderabad, Bengaluru
Hybrid
Your day at NTT DATA
The Software Applications Development Engineer is a seasoned subject-matter expert, responsible for developing new applications and improving existing applications based on the needs of the internal organization and/or external clients.

What you'll be doing
Years of experience: 5

Data Engineer:
- Work closely with the Lead Data Engineer to understand business requirements, and analyse and translate these requirements into technical specifications and solution designs.
- Work closely with the data modeller to ensure data models support the solution design.
- Develop, test, and fix ETL code using Snowflake, Fivetran, SQL, and stored procedures.
- Analyse the data and ETL for defects/service tickets raised (for solutions in production).
- Develop documentation and artefacts to support projects.
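The role involves developing and fixing ETL code with SQL and stored procedures in Snowflake. A staple of such ETL is the merge/upsert step that applies an incremental batch to a target table. As a hedged stand-in (Snowflake itself would use MERGE INTO; the table, columns, and data below are invented, and SQLite's ON CONFLICT clause plays the same role here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Asha', 'Pune')")

# Incremental batch: one update to an existing row, one brand-new row.
batch = [(1, "Asha", "Mumbai"), (2, "Ravi", "Chennai")]
conn.executemany(
    """
    INSERT INTO dim_customer (id, name, city) VALUES (?, ?, ?)
    ON CONFLICT(id) DO UPDATE SET name = excluded.name, city = excluded.city
    """,
    batch,
)
rows = conn.execute("SELECT id, city FROM dim_customer ORDER BY id").fetchall()
print(rows)  # [(1, 'Mumbai'), (2, 'Chennai')]
```

The same pattern, wrapped in a stored procedure and driven by a load timestamp, is how incremental Fivetran-landed data is typically folded into dimension tables.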
Posted 2 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
New Delhi, Chennai, Bengaluru
Hybrid
Your day at NTT DATA
We are seeking an experienced Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing
Key Responsibilities:
- Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment.
- Data Ingestion and Integration: develop data ingestion frameworks to collect data from various sources, transform it, and integrate it into a unified data platform for GenAI model training and deployment.
- GenAI Model Integration: collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
- Cloud Infrastructure Management: design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
- Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow.
- Performance Optimization: optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
- Data Security and Compliance: ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
- Client Collaboration: collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
- Innovation and R&D: stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
- Knowledge Sharing: share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
- Bachelor's degree in computer science, engineering, or related fields (Master's recommended).
- Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications.
- 5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or cloud-native platforms).
- Proficiency in programming languages like SQL, Python, and PySpark.
- Strong data architecture, data modeling, and data governance skills.
- Experience with Big Data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi).
- Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions).
- Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras).

Nice to have:
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Ability to integrate vector databases and implement similarity search techniques, with a focus on GraphRAG, is a plus.
- Familiarity with API gateway and service mesh architectures.
- Experience with low-latency/streaming, batch, and micro-batch processing.
- Familiarity with Linux-based operating systems and REST APIs.
Posted 2 weeks ago
2.0 - 5.0 years
4 - 7 Lacs
Bengaluru
Hybrid
Additional Career Level Description:
- Knowledge and application: seasoned, experienced professional; has complete knowledge and understanding of the area of specialization. Uses evaluation, judgment, and interpretation to select the right course of action.
- Problem solving: works on problems of diverse scope where analysis of information requires evaluation of identifiable factors. Resolves and assesses a wide range of issues in creative ways and suggests variations in approach.
- Interaction: enhances relationships and networks with senior internal/external partners who are not familiar with the subject matter, often requiring persuasion. Works with others outside own area of expertise, adapting style to differing audiences, and often advises others on difficult matters.
- Impact: impacts short- to medium-term goals through personal effort or influence over team members.
- Accountability: accountable for own targets; work is done independently and reviewed at critical points.
Posted 2 weeks ago
6.0 - 10.0 years
7 - 14 Lacs
Bengaluru
Hybrid
Roles and Responsibilities
- Architect and incorporate an effective data framework enabling end-to-end data solutions.
- Understand business needs, use cases, and drivers for insights, and translate them into detailed technical specifications.
- Create epics, features, and user stories with clear acceptance criteria for execution and delivery by the data engineering team.
- Create scalable and robust data solution designs that incorporate governance, security, and compliance aspects.
- Develop and maintain logical and physical data models, and work closely with data engineers, data analysts, and data testers on their successful implementation.
- Analyze, assess, and design data integration strategies across various sources and platforms.
- Create project plans and timelines while monitoring and mitigating risks and controlling the progress of the project.
- Conduct daily scrums with the team, with a clear focus on meeting sprint goals and timely resolution of impediments.
- Act as a liaison between technical teams and business stakeholders, and ensure alignment between them.
- Guide and mentor the team on best practices for data solutions and delivery frameworks.
- Actively work with, facilitate, and support stakeholders/clients to complete User Acceptance Testing, and ensure strong adoption of the data products after launch.
- Define and measure KPIs/KRAs for features, ensuring the data roadmap is verified through measurable outcomes.

Prerequisites
- 5 to 8 years of professional, hands-on experience building end-to-end data solutions on cloud-based data platforms, including 2+ years in a Data Architect role.
- Proven hands-on experience building pipelines for data lakes, data lakehouses, data warehouses, and data visualization solutions.
- Sound understanding of modern data technologies like Databricks, Snowflake, Data Mesh, and Data Fabric.
- Experience managing the data life cycle in a fast-paced, Agile/Scrum environment.
- Excellent spoken and written communication, receptive listening skills, and the ability to convey complex ideas clearly and concisely to technical and non-technical audiences.
- Ability to collaborate and work effectively with cross-functional teams, project stakeholders, and end users to produce quality deliverables within stipulated timelines.
- Ability to manage, coach, and mentor a team of data engineers, data testers, and data analysts.
- Strong process driver with expertise in the Agile/Scrum framework on tools like Azure DevOps, Jira, or Confluence.
- Exposure to machine learning, GenAI, and modern AI-based solutions.

Experience
Technical Lead, Data Analytics, with 6+ years of overall experience, of which 2+ years is in data architecture.

Education
Engineering degree from a Tier 1 institute preferred.

Compensation
The compensation structure will be as per industry standards.
Posted 2 weeks ago
5.0 - 8.0 years
22 - 35 Lacs
Bengaluru
Work from Office
Role and Responsibilities
You will be embedded within a team of machine learning engineers and data scientists, responsible for building and productizing generative AI and deep learning solutions. You will:
- Design, develop, and deploy production-ready, scalable solutions that utilize GenAI, traditional ML models, data science, and ETL pipelines.
- Collaborate with cross-functional teams to integrate AI-driven solutions into business operations.
- Build and enhance frameworks for automation, data processing, and model deployment.
- Utilize GenAI tools and workflows to improve the efficiency and effectiveness of AI solutions.
- Conduct research and stay updated with the latest advancements in generative AI and related technologies.
- Deliver key product features within cloud analytics.

Requirements:
- B.Tech, M.Tech, or PhD in Computer Science, Data Science, Electrical Engineering, Statistics, Maths, Operations Research, or a related domain.
- Strong programming skills in Python and SQL, and solid fundamentals in computer science, particularly algorithms, data structures, and OOP.
- Experience building end-to-end solutions on AWS cloud infrastructure.
- Good understanding of internals and schema design for various data stores (RDBMS, vector databases, and NoSQL).
- Experience with GenAI tools and workflows, and large language models (LLMs).
- Experience with cloud platforms and deploying models at scale.
- Strong analytical and problem-solving skills with keen attention to detail.
- Strong knowledge of statistics, probability, and estimation theory.

Desired Skills:
- Familiarity with frameworks such as PyTorch, TensorFlow, and Hugging Face.
- Experience with data visualization tools like Tableau, Grafana, or Plotly Dash.
- Exposure to AWS services like Kinesis, SQS, EKS, ASG, Lambda, etc.
- Expertise in at least one popular Python web framework (FastAPI, Django, or Flask).
- Exposure to quick prototyping using Streamlit, Gradio, Dash, etc.
- Exposure to Big Data processing (Snowflake, Redshift, HDFS, EMR).
Posted 2 weeks ago
7.0 - 12.0 years
22 - 30 Lacs
Pune, Bengaluru
Hybrid
Job Role & Responsibilities:
- Understand operational needs by collaborating with specialized teams and supporting key business operations; this involves architecting, designing, building, and deploying data systems, pipelines, etc.
- Design and implement agile, scalable, and cost-efficient solutions on cloud data services
- Lead a team of developers; run sprint planning and execution to ensure timely deliveries

Technical Skills, Qualification & Experience Required:
- 7-10 years of experience in Azure cloud data engineering: Azure Databricks, Data Factory, PySpark, SQL, Python
- Hands-on experience as a data engineer with Azure Databricks, Data Factory, PySpark, and SQL
- Proficient in Azure cloud services
- Architect and implement ETL and data movement solutions; migrate data from traditional database systems to the cloud
- Strong hands-on experience working with streaming datasets
- Building complex notebooks in Databricks to achieve business transformations
- Hands-on expertise in data refinement using PySpark and Spark SQL
- Familiarity with building datasets using Scala
- Familiarity with tools such as Jira and GitHub
- Experience leading agile scrum, sprint planning, and review sessions
- Good communication and interpersonal skills
- Comfortable working in a multidisciplinary team within a fast-paced environment
* Immediate joiners will be preferred
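The "ETL and data movement" bullet above is the heart of this role. As a minimal, database-agnostic sketch of the extract-transform-load pattern (using stdlib SQLite purely for portability; the posting itself targets Azure Databricks and Data Factory, and the table names here are hypothetical):

```python
import sqlite3

def etl(src: sqlite3.Connection, dst: sqlite3.Connection) -> int:
    """Extract raw orders, transform (filter + derive a total), load into a
    clean table. Returns the number of rows loaded."""
    rows = src.execute("SELECT id, amount, qty FROM raw_orders").fetchall()  # extract
    clean = [(i, amount * qty) for i, amount, qty in rows if qty > 0]        # transform
    dst.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, total REAL)")
    dst.executemany("INSERT INTO orders VALUES (?, ?)", clean)               # load
    dst.commit()
    return len(clean)

# Demo with in-memory databases and made-up order data.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, qty INTEGER)")
src.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                [(1, 9.5, 2), (2, 4.0, 0), (3, 3.0, 1)])
dst = sqlite3.connect(":memory:")
print(etl(src, dst))  # rows with qty > 0 are loaded
```

In a Databricks notebook the same three stages would read from a source system, apply PySpark transformations, and write to Delta tables.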
Posted 2 weeks ago
5.0 - 10.0 years
27 - 32 Lacs
Gurugram, Bengaluru
Work from Office
Job Title: Technical Project Manager
Location: Gurgaon / Bangalore
Nature of Job: Permanent
Department: Data Analytics

What you will be doing:
- Demonstrated client servicing and business analytics skills, with at least 5-9 years of experience as a data engineer, BI developer, data analyst, technical project manager, program manager, etc.
- Technical project management: drive the BRD, project scope, resource allocation, team coordination, stakeholder communication, UAT, production fixes, change requests, and project governance
- Sound knowledge of the banking industry (payments, retail operations, fraud, etc.)
- Strong ETL experience, or experience as a Teradata developer
- Manage a team of business analysts, BI developers, and ETL developers to ensure that projects are completed on time
- Provide thought leadership and technical advice on business issues
- Design methodological frameworks and solutions

What we're looking for:
- Bachelor's/Master's degree in computer science, data science, AI, or statistics; certification in GenAI; Master's degree preferred
- Ability to manage multiple projects at a time, from inception to delivery
- Superior problem-solving, analytical, and quantitative skills
- Entrepreneurial mindset, coupled with a "can do" attitude
- Demonstrated ability to collaborate with cross-functional, cross-border teams and coach/mentor colleagues
Posted 2 weeks ago
4.0 - 6.0 years
8 - 18 Lacs
Bengaluru
Work from Office
We are seeking a skilled Data Engineer & Data Analyst with over 4 years of experience to design, build, and maintain scalable data pipelines and perform advanced data analysis to support business intelligence and data-driven decision-making. The ideal candidate will have a strong foundation in computer science principles, extensive experience with SQL and big data tools, and proficiency in cloud platforms and data visualization tools.

Key Responsibilities:
- Design, develop, and maintain robust, scalable ETL pipelines using Apache Airflow, DBT, Composer (GCP), Control-M, Cron, Luigi, and similar tools
- Build and optimize data architectures including data lakes and data warehouses
- Integrate data from multiple sources, ensuring data quality and consistency
- Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions
- Analyze complex datasets to identify trends, generate actionable insights, and support decision-making
- Develop and maintain dashboards and reports using Tableau, Power BI, and Jupyter Notebooks for visualization and pipeline validation
- Manage and optimize relational and NoSQL databases such as MySQL, PostgreSQL, Oracle, MongoDB, and DynamoDB
- Work with big data tools and frameworks including Hadoop, Spark, Hive, Kafka, Informatica, Talend, SSIS, and Dataflow
- Utilize cloud data services and warehouses like AWS Glue, GCP Dataflow, Azure Data Factory, Snowflake, Redshift, and BigQuery
- Support CI/CD pipelines and DevOps workflows using Git, Docker, Terraform, and related tools
- Ensure data governance, security, and compliance standards are met
- Participate in Agile and DevOps processes to enhance data engineering workflows

Required Qualifications:
- 4+ years of professional experience in data engineering and data analysis roles
- Strong proficiency in SQL and experience with database management systems such as MySQL, PostgreSQL, Oracle, and MongoDB
- Hands-on experience with big data tools like Hadoop and Apache Spark
- Proficient in Python programming
- Experience with data visualization tools such as Tableau, Power BI, and Jupyter Notebooks
- Proven ability to design, build, and maintain scalable ETL pipelines using tools like Apache Airflow, DBT, Composer (GCP), Control-M, Cron, and Luigi
- Familiarity with data engineering tools including Hive, Kafka, Informatica, Talend, SSIS, and Dataflow
- Experience working with cloud data warehouses and services (Snowflake, Redshift, BigQuery, AWS Glue, GCP Dataflow, Azure Data Factory)
- Understanding of data modeling concepts and data lake/data warehouse architectures
- Experience supporting CI/CD practices with Git, Docker, Terraform, and DevOps workflows
- Knowledge of both relational and NoSQL databases, including PostgreSQL, BigQuery, MongoDB, and DynamoDB
- Exposure to Agile and DevOps methodologies
- Experience with at least one cloud platform:
  - Google Cloud Platform (BigQuery, Dataflow, Composer, Cloud Storage, Pub/Sub)
  - Amazon Web Services (S3, Glue, Redshift, Lambda, Athena)
  - Microsoft Azure (Data Factory, Synapse Analytics, Blob Storage)

Preferred Skills:
- Strong problem-solving and communication skills
- Ability to work independently and collaboratively in a team environment
- Experience with service development, REST APIs, and automation testing is a plus
- Familiarity with version control systems and workflow automation
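What Airflow, Luigi, Composer, and Cron-driven pipelines all share is a dependency graph of tasks that must be resolved into a valid execution order before anything runs. The sketch below (not from the posting; task names are hypothetical) shows that scheduling core in plain Python using the stdlib's `graphlib`.

```python
from graphlib import TopologicalSorter

# A tiny pipeline graph: each task maps to the set of tasks it depends on,
# the way an Airflow or Luigi DAG declares upstream dependencies.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
    "report": {"load"},
}

def run_order(dag):
    """Return an execution order that respects every dependency --
    the resolution step an orchestrator performs before running tasks."""
    return list(TopologicalSorter(dag).static_order())

print(run_order(pipeline))  # ['extract', 'transform', 'validate', 'load', 'report']
```

Orchestrators layer scheduling, retries, and parallel execution of independent tasks on top of exactly this topological ordering.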
Posted 2 weeks ago
3.0 - 6.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

About The Role

Role Purpose: The purpose of the role is to define, architect, and lead the delivery of machine learning and AI solutions.

Do
1. Demand generation through support in solution development
a. Support go-to-market strategy
i. Contribute to developing solutions and proof of concepts aligned to key offerings to enable solution-led sales
b. Collaborate with different colleges and institutes for research initiatives and provide data science courses
2. Revenue generation through building and operationalizing machine learning and deep learning solutions
a. Develop machine learning / deep learning models for decision augmentation or for automation solutions
b. Collaborate with ML engineers, data engineers, and IT to evaluate ML deployment options
3. Team management
a. Talent management
i. Support onboarding and training to enhance capability and effectiveness

Deliver
No. | Performance Parameter | Measure
1 | Demand generation | # PoCs supported
2 | Revenue generation through delivery | Timeliness, customer success stories, customer use cases
3 | Capability building & team management | # Skills acquired

Mandatory Skills: Data Analysis. Experience: 3-5 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention.
Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 2 weeks ago
5.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About The Role

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product to facilitate better client interaction and troubleshooting for team members
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers’ and clients’ business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs

- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, technical test performance

Mandatory Skills: Hadoop. Experience: 5-8 Years.

Applications from people with disabilities are explicitly welcome.
Posted 2 weeks ago
4.0 - 8.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Python, NumPy, Pandas, REST APIs; strong in data engineering and analysis, SQL Server (complex SQL). This is a hands-on role: it will involve design, coding, testing, working with product owners/scrum masters for sprint planning, estimation, and demos, and leading and guiding junior developers as needed.

Years of experience: 6 to 10 years

Mandatory tech skills:
- Python, NumPy, Pandas; strong in data engineering and analysis; SQL Server
- Good at writing Python code; hands-on experience with the pandas and NumPy stack
- Able to perform data cleanup and summarization using NumPy/pandas
- SQL knowledge is essential
- Must have expertise in REST API development
- Cloud experience is preferred (Azure)
- Uses pertinent data and facts to identify and solve a range of problems within area of expertise

Do
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product to facilitate better client interaction and troubleshooting for team members
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers’ and clients’ business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, technical test performance
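The "data cleanup and summarization using NumPy/pandas" skill named in the listing above boils down to a short, repeatable recipe: drop or fill missing values, then aggregate. A minimal sketch with made-up sales data (column names are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical messy sales records, for illustration only.
raw = pd.DataFrame({
    "region": ["north", "north", "south", "south", None],
    "sales":  [100.0, np.nan, 80.0, 120.0, 50.0],
})

# Cleanup: drop rows with no region, fill missing sales with the column mean.
clean = raw.dropna(subset=["region"]).copy()
clean["sales"] = clean["sales"].fillna(clean["sales"].mean())

# Summarization: total and average sales per region.
summary = clean.groupby("region")["sales"].agg(["sum", "mean"])
print(summary)
```

The same drop/fill/groupby pattern scales directly to SQL (`WHERE ... IS NOT NULL`, `COALESCE`, `GROUP BY`), which is why the posting pairs the two skills.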
Posted 2 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Remote
Job Requirement for Offshore Data Engineer (with ML expertise)
Work Mode: Remote
Base Location: Bengaluru
Experience: 5+ Years

Technical Skills & Expertise:

PySpark & Apache Spark:
- Extensive experience with PySpark and Spark for big data processing and transformation
- Strong understanding of Spark architecture, optimization techniques, and performance tuning
- Ability to work with Spark jobs in distributed computing environments like Databricks

Data Mining & Transformation:
- Hands-on experience designing and implementing data mining workflows
- Expertise in data transformation processes, including ETL (Extract, Transform, Load) pipelines
- Experience in large-scale data ingestion, aggregation, and cleaning

Programming Languages:
- Python & Scala: proficient in Python for data engineering tasks, including libraries like Pandas and NumPy; Scala proficiency is preferred for Spark job development

Big Data Concepts:
- In-depth knowledge of big data frameworks and paradigms, such as distributed file systems, parallel computing, and data partitioning

Big Data Technologies:
- Cassandra & Hadoop: experience with NoSQL databases like Cassandra and distributed storage systems like Hadoop
- Data Warehousing Tools: proficiency with Hive for data warehousing solutions and querying
- ETL Tools: experience with the Beam architecture and other ETL tools for large-scale data workflows

Cloud Technologies (GCP):
- Expertise in Google Cloud Platform (GCP), including core services like Cloud Storage, BigQuery, and Dataflow
- Experience with Dataflow jobs for batch and stream processing
- Familiarity with managing workflows using Airflow (Cloud Composer) for task scheduling and orchestration in GCP

Machine Learning & AI:
- GenAI: familiarity with generative AI and its applications in ML pipelines
- ML model development: knowledge of basic ML model building using tools like Pandas and NumPy, and visualization with Matplotlib
- MLOps pipelines: experience managing end-to-end MLOps pipelines for deploying models in production, particularly LLM (large language model) deployments
- RAG architecture: understanding of and experience in building pipelines using Retrieval-Augmented Generation (RAG) to enhance model performance and output

Tech stack: Spark, PySpark, Python, Scala, GCP Dataflow, Cloud Composer (Airflow), ETL, Databricks, Hadoop, Hive, GenAI, basic ML modeling, MLOps, LLM deployment, RAG
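The RAG requirement above reduces to two steps: retrieve the documents most relevant to a query, then augment the LLM prompt with them before generation. The sketch below (not from the posting; it substitutes simple word overlap for the embedding similarity search a production pipeline would use, and the corpus is made up) shows the shape of that pipeline.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query -- a stand-in for
    the embedding similarity search a real RAG pipeline would use."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the prompt with retrieved context before generation."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical corpus for illustration.
corpus = [
    "Spark jobs are tuned by adjusting partitions",
    "Dataflow runs Beam pipelines on GCP",
    "Hive queries data stored in Hadoop",
]
print(build_prompt("how does Dataflow run Beam pipelines", corpus))
```

A production version swaps `retrieve` for a vector-store query and sends `build_prompt`'s output to the deployed LLM; the control flow is otherwise the same.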
Posted 2 weeks ago
2.0 - 7.0 years
8 - 12 Lacs
Hyderabad
Work from Office
About The Role

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, and to ensure they meet 100% of quality assurance parameters.

Do
1. Be instrumental in understanding the requirements and design of the product/software
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas throughout the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and availability
- Analyze client requirements and convert them into feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases, and executing them
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications of an existing system
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all issues are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to the concerned stakeholders

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback on a regular basis to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document all necessary details and reports formally, for proper understanding of the software from client proposal to implementation
- Ensure good quality of interaction with the customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally

Deliver
No. | Performance Parameter | Measure
1 | Continuous integration, deployment & monitoring of software | 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2 | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3 | MIS & Reporting | 100% on-time MIS & report generation

Applications from people with disabilities are explicitly welcome.
Posted 2 weeks ago
4.0 - 9.0 years
12 - 16 Lacs
Bengaluru
Work from Office
About The Role

Role Purpose: The purpose of the role is to define, architect, and lead the delivery of machine learning and AI solutions.

Do
1. Demand generation through support in solution development
a. Support go-to-market strategy
i. Collaborate with sales, pre-sales & consulting teams to assist in creating solutions and propositions for proactive demand generation
ii. Contribute to developing solutions and proof of concepts aligned to key offerings to enable solution-led sales
b. Collaborate with different colleges and institutes for recruitment, joint research initiatives, and providing data science courses

2. Revenue generation through building & operationalizing machine learning and deep learning solutions
a. Develop machine learning / deep learning models for decision augmentation or for automation solutions
b. Collaborate with ML engineers, data engineers, and IT to evaluate ML deployment options
c. Integrate model performance management tools into the current business infrastructure

3. Team management
a. Resourcing
i. Support the recruitment process to onboard the right resources for the team
b. Talent management
i. Support onboarding and training for team members to enhance capability & effectiveness
ii. Manage team attrition
c. Performance management
i. Conduct timely performance reviews and provide constructive feedback to own direct reports
ii. Be a role model to the team for the five habits
iii. Ensure that Performance Nxt is followed for the entire team
d. Employee satisfaction and engagement
i. Lead and drive engagement initiatives for the team

Deliver
No. | Performance Parameter | Measure
1 | Demand generation | Order booking
2 | Revenue generation through delivery | Timeliness, customer success stories, customer use cases
3 | Capability building & team management | % trained on new skills, team attrition %

Applications from people with disabilities are explicitly welcome.
Posted 2 weeks ago
5.0 - 8.0 years
12 - 16 Lacs
Gurugram
Work from Office
About The Role

Role Purpose: The purpose of the role is to define, architect, and lead the delivery of machine learning and AI solutions.

Do
1. Demand generation through support in solution development
a. Support go-to-market strategy
i. Collaborate with sales, pre-sales & consulting teams to assist in creating solutions and propositions for proactive demand generation
ii. Contribute to developing solutions and proof of concepts aligned to key offerings to enable solution-led sales
b. Collaborate with different colleges and institutes for recruitment, joint research initiatives, and providing data science courses

2. Revenue generation through building & operationalizing machine learning and deep learning solutions
a. Develop machine learning / deep learning models for decision augmentation or for automation solutions
b. Collaborate with ML engineers, data engineers, and IT to evaluate ML deployment options
c. Integrate model performance management tools into the current business infrastructure

3. Team management
a. Resourcing
i. Support the recruitment process to onboard the right resources for the team
b. Talent management
i. Support onboarding and training for team members to enhance capability & effectiveness
ii. Manage team attrition
c. Performance management
i. Conduct timely performance reviews and provide constructive feedback to own direct reports
ii. Be a role model to the team for the five habits
iii. Ensure that Performance Nxt is followed for the entire team
d. Employee satisfaction and engagement
i. Lead and drive engagement initiatives for the team

Deliver
No. | Performance Parameter | Measure
1 | Demand generation | Order booking
2 | Revenue generation through delivery | Timeliness, customer success stories, customer use cases
3 | Capability building & team management | % trained on new skills, team attrition %

Mandatory Skills: Python for Data Science. Experience: 5-8 Years.

Applications from people with disabilities are explicitly welcome.
Posted 2 weeks ago
5.0 - 10.0 years
27 - 42 Lacs
Faridabad
Work from Office
Role & Responsibilities
As a Data Engineer dedicated to projects, you will play a crucial role in designing and maintaining robust data architectures. This position requires 100% dedication to projects.
- Data Architecture Design: design and implement scalable, reliable, and maintainable data architectures
- Data Integration: develop ETL processes to integrate data from various sources into a centralized data warehouse; ensure data quality and integrity throughout the integration process
- Database Management: administer and optimize databases for high performance and availability; implement security measures to safeguard data against unauthorized access
- Data Modelling: create and maintain data models for efficient storage and retrieval of information; collaborate with data scientists and analysts to translate data needs into effective structures
- Coding and Scripting: utilize programming languages (e.g., Python, SQL) to develop and maintain data pipelines
- Performance Monitoring: monitor data processing systems to identify and resolve performance bottlenecks

Good-to-have Skills:
1. Capability to create data pipelines for KPI dashboards
2. Optimization of (cloud) databases for efficient resource utilization
3. Expertise in various database technologies, with the ability to match a technology to a use case
4. Experience with database caching, decoupling the database from reports
5. Proficiency in using microservices for data ingestion
Posted 2 weeks ago
3.0 - 6.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark

Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, and Kafka
- Experience in cloud computing, e.g., AWS, GCP, Azure, etc.
- Able to quickly pick up new programming languages, technologies, and frameworks
- Strong skills in building positive relationships across Product and Engineering
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modelling, governance, and data architecture
- Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components)
- Experience working in an Agile and Scrum development process
- Experience with EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake
- Experience architecting data products on streaming, serverless, and microservices architectures and platforms
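The batch-vs-streaming distinction in the requirements above mostly comes down to windowed aggregation: a streaming pipeline (e.g. Spark Structured Streaming reading from Kafka) groups an unbounded event stream into fixed time windows and aggregates each window. A minimal pure-Python sketch of a tumbling window (event data is made up for illustration):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed, non-overlapping time
    windows and count keys per window -- the core aggregation a
    streaming pipeline performs over an event stream."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # bucket into a 60s window
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical click events as (epoch seconds, page) pairs.
events = [(0, "home"), (10, "home"), (59, "cart"), (61, "home"), (130, "cart")]
print(tumbling_window_counts(events))
```

Frameworks add what this sketch omits: incremental state, late-event handling via watermarks, and distribution across partitions.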
Posted 2 weeks ago