4.0 - 8.0 years
6 - 10 Lacs
Kolkata
Work from Office
Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Summary: In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Responsibilities:
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions
- Knowledge of data lake and data warehouse concepts
- Experience working with AWS big data technologies
- Improve the quality and reliability of data pipelines through monitoring, validation, and failure detection
- Deploy and configure components to production environments

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8 years
Education qualification: B.Tech/MBA/MCA
Degrees/Field of Study required: Bachelor Degree, Master Degree
Required Skills: Data Engineering
Additional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Travel Requirements Available for Work Visa Sponsorship
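The "monitoring, validation, and failure detection" responsibility above can be illustrated with a minimal, self-contained sketch. The field names and rules here are assumptions for illustration, not part of the posting; in a real AWS Glue/PySpark job the same checks would typically be expressed as DataFrame filters before loading into Redshift.

```python
from dataclasses import dataclass

# Hypothetical row-level checks of the kind an ingestion job might apply;
# the required fields and the amount rule are illustrative only.
REQUIRED_FIELDS = ("order_id", "amount", "event_ts")

@dataclass
class ValidationResult:
    valid: list
    rejected: list

def validate_batch(records):
    """Split a batch into valid rows and rejects, so bad data is
    detected and quarantined instead of silently loaded downstream."""
    valid, rejected = [], []
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if rec.get(f) in (None, "")]
        # Reject when a required field is absent or the amount is not a
        # non-negative number; short-circuiting avoids a KeyError here.
        if missing or not isinstance(rec.get("amount"), (int, float)) or rec["amount"] < 0:
            rejected.append({"record": rec, "missing": missing})
        else:
            valid.append(rec)
    return ValidationResult(valid, rejected)
```

Routing rejects to a quarantine location (rather than failing the whole batch) is one common way to keep a pipeline "robust" in the sense the posting describes.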
Posted 2 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Ahmedabad
Work from Office
Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Summary: In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Responsibilities / Key Attributes:
- Experience implementing and delivering data solutions and pipelines on the AWS cloud platform
- Design, implement, and maintain the data architecture for all AWS data services
- Strong understanding of data modelling, data structures, databases (Redshift), and ETL processes
- Work with stakeholders to identify business needs and requirements for data-related projects
- Strong SQL and/or Python or PySpark knowledge
- Create data models that can be used to extract information from various sources and store it in a usable format
- Optimize data models for performance and efficiency
- Write SQL queries to support data analysis and reporting
- Monitor and troubleshoot data pipelines
- Collaborate with software engineers to design and implement data-driven features
- Perform root cause analysis on data issues
- Maintain documentation of the data architecture and ETL processes
- Identify opportunities to improve performance through better database structure or indexing methods
- Maintain existing applications by updating code or adding new features to meet new requirements
- Design and implement security measures to protect data from unauthorized access or misuse
- Recommend infrastructure changes to improve capacity or performance
- Experience in the process industry

Mandatory skill sets: Data Modelling, AWS, ETL
Preferred skill sets: Data Modelling, AWS, ETL
Years of experience required: 4-8 years
Education qualification: BE, B.Tech, MCA, M.Tech
Degrees/Field of Study required: MBA (Master of Business Administration), Bachelor of Engineering, Bachelor of Technology
Required Skills: Data Modeling
Additional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Travel Requirements Available for Work Visa Sponsorship
Posted 2 weeks ago
4.0 - 10.0 years
6 - 12 Lacs
Thane
Work from Office
Join Teleperformance, Where Excellence Meets Opportunity!

Teleperformance is a leading provider of customer experience management, offering premier omnichannel support to top global companies. Our diverse service locations, including on-site and work-at-home programs, ensure flexibility and broad reach.

Why Choose Teleperformance?
We emphasize the importance of our employees, fostering enduring relationships within our teams and communities. Our dedication to employee satisfaction distinguishes us. Utilize advanced support technologies and processes engineered to achieve outstanding results. We cultivate lasting client relationships and make positive contributions to our local communities.

Become Part of an Exceptional Team!
Join Teleperformance, where our world-class workforce and innovative solutions drive success. Experience a workplace that values your development, supports your goals, and celebrates your accomplishments.

Job Description:
General Information Technology work involves managing or performing work across multiple areas of an organization's overall IT platform/infrastructure, including analysis, development, and administration of:
- IT systems software, hardware, and databases
- Data and voice networks
- Data processing operations
- End-user technology and software support

Conducts cost/benefit analyses for proposed IT projects as input to the organization's IT roadmap. An experienced specialist in one specialized discipline with a thorough understanding of related disciplines. Will most often be a driving force behind the development of new solutions for programs, complex projects, processes, or activities. Serves as the final decision/opinion maker in the area; coaches, mentors, and trains others in the area of expertise. Ensures the implementation of short- to medium-term activities within the business area or support sub-function in the context of the department's strategy. Ensures appropriate policies, processes, and standards are developed and implemented to support short- to medium-term tactical direction. Leads a team of specialists, sometimes with several hierarchical levels, with full employee lifecycle responsibility.
Posted 2 weeks ago
5.0 - 8.0 years
7 - 10 Lacs
Pune
Work from Office
This position performs general administrative responsibilities including preparation of reports using various software packages, compilation of information from various sources, and handling small-scale projects. It also covers general office duties that may include word processing, data entry, auditing documents, answering phones, distributing mail, reserving conference rooms, coordinating meetings, and other duties as assigned. This position may deal with confidential material on a regular basis.
Posted 2 weeks ago
7.0 - 12.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Are you ready to make an impact at DTCC?

Pay and Benefits:
- Competitive compensation, including base pay and annual incentive
- Comprehensive health and life insurance and well-being benefits, based on location
- Pension / retirement benefits
- Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being

DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The impact you will have in this role:
At DTCC, the Observability team is at the forefront of ensuring the health, performance, and reliability of our critical systems and applications. We empower the organization with real-time visibility into infrastructure and business applications by leveraging cutting-edge monitoring, reporting, and visualization tools. Our team collects and analyzes metrics, logs, and traces using platforms like Splunk and other telemetry solutions. This data is essential for assessing application health and availability, and for enabling rapid root cause analysis when issues arise, helping us maintain resilience in a fast-paced, high-volume trading environment. If you're passionate about observability, data-driven problem solving, and building systems that make a real-world impact, we'd love to have you on our team.

Primary Responsibilities:
As a member of DTCC's Observability team, you will play a pivotal role in enhancing our monitoring and telemetry capabilities across critical infrastructure and business applications. Your responsibilities will include:
- Lead the migration from OpenText monitoring tools to Grafana and other open-source platforms.
- Design and deploy monitoring rules for infrastructure and business applications.
- Develop and manage alerting rules and notification workflows.
- Build real-time dashboards to visualize system health and performance.
- Configure and manage OpenTelemetry Collectors and Pipelines.
- Integrate observability tools with CI/CD, incident management, and cloud platforms.
- Deploy and manage observability agents across diverse environments.
- Perform upgrades and maintenance of observability platforms.

Qualifications:
- Minimum of 7 years of related experience.
- Bachelor's degree preferred, or equivalent experience.

Talent needed for success:
- Proven experience designing intuitive, real-time dashboards (e.g., in Grafana) that effectively communicate system health, performance trends, and business KPIs.
- Expertise in defining and tuning monitoring rules, thresholds, and alerting logic to ensure accurate and actionable incident detection.
- Strong understanding of both application-level and operating-system-level metrics, including CPU, memory, disk I/O, network, and custom business metrics.
- Experience with structured log ingestion, parsing, and analysis using tools like Splunk, Fluentd, or OpenTelemetry.
- Familiarity with implementing and analyzing synthetic transactions and real user monitoring to assess end-user experience and application responsiveness.
- Hands-on experience with application tracing tools and frameworks (e.g., OpenTelemetry, Jaeger, Zipkin) to diagnose performance bottlenecks and service dependencies.
- Proficiency in configuring and using AWS CloudWatch for collecting and visualizing cloud-native metrics, logs, and events.
- Understanding of containerized environments (e.g., Docker, Kubernetes) and how to monitor container health, resource usage, and orchestration metrics.
- Ability to write scripts or small applications in languages such as Python, Java, or Bash to automate observability tasks and data processing.
- Experience with automation and configuration management tools such as Ansible, Terraform, Chef, or SCCM to deploy and manage observability components at scale.

Actual salary is determined based on the role, location, individual experience, skills, and other considerations. Please contact us to request accommodation.
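The "defining and tuning monitoring rules, thresholds, and alerting logic" skill above can be sketched in a few lines. This is a hypothetical, self-contained illustration of threshold evaluation of the kind Grafana alert rules encode; the metric names, thresholds, and severities are assumptions, not anything from the posting or from Grafana's actual rule format.

```python
# Illustrative alert rules: each rule compares one metric against a
# threshold and carries a severity label. All names are hypothetical.
RULES = [
    {"metric": "cpu_pct",      "op": ">", "threshold": 90.0, "severity": "critical"},
    {"metric": "mem_pct",      "op": ">", "threshold": 80.0, "severity": "warning"},
    {"metric": "disk_free_gb", "op": "<", "threshold": 10.0, "severity": "critical"},
]

def evaluate(sample):
    """Return the alerts fired by one metrics sample (metric -> value)."""
    fired = []
    for rule in RULES:
        value = sample.get(rule["metric"])
        if value is None:
            continue  # missing telemetry would be handled by a no-data alert
        breached = (value > rule["threshold"]) if rule["op"] == ">" else (value < rule["threshold"])
        if breached:
            fired.append((rule["metric"], rule["severity"], value))
    return fired
```

In practice this logic lives inside the alerting backend (Grafana, CloudWatch alarms, etc.); the point of the sketch is that tuning "thresholds and alerting logic" means choosing these comparisons so alerts stay accurate and actionable.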
Posted 2 weeks ago
8.0 - 13.0 years
30 - 35 Lacs
Hyderabad
Work from Office
We exist to wow our customers. We know we're doing the right thing when we hear our customers say, "How did we ever live without Coupang?" Born out of an obsession to make shopping, eating, and living easier than ever, we're collectively disrupting the multi-billion-dollar e-commerce industry from the ground up. We are one of the fastest-growing e-commerce companies and have established an unparalleled reputation for being a dominant and reliable force in South Korean commerce. We are proud to have the best of both worlds: a startup culture with the resources of a large global public company. This fuels us to continue our growth and launch new services at the speed we have been at since our inception. We are all entrepreneurial, surrounded by opportunities to drive new initiatives and innovations. At our core, we are bold and ambitious people who like to get our hands dirty and make a hands-on impact. At Coupang, you will see yourself, your colleagues, your team, and the company grow every day. Our mission to build the future of commerce is real. We push the boundaries of what's possible to solve problems and break traditional tradeoffs. Join Coupang now to create an epic experience in this always-on, high-tech, and hyper-connected world.

As a Staff Software Engineer, Backend, you will work on distributed systems, data processing pipelines, and building next-generation platforms and products. You will help the team bring industry best practices to software development and operations while improving their engineering skills to build a pioneering e-commerce experience in new global markets. Working closely with a group of engineers in multiple geographic locations, you'll solve challenging problems at scale with high reliability. Your efforts will directly impact tens of millions of users every single day!

Key Responsibilities:
- Drive the highest quality of architecture and design of data pipelines and systems.
- Draw roadmaps and a vision for the scalable and robust growth of the online serving platform.
- Collaborate with other engineering teams to make the platform open and extensible, unlocking innumerable opportunities for innovation.
- Align with stakeholders and lead engineers on mission-critical projects.
- Decompose complex problems into simple, straightforward solutions.
- Possess expert knowledge in performance, scalability, and availability of data pipelines.
- Leverage knowledge of internal and industry best practices in design.
- Deep-dive into and handle critical system issues.

Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent.
- Minimum 8 years of experience in software design and development in Java or Python.
- Hands-on experience designing, building, and deploying scalable, highly available data pipelines.
- Large-system architecture design and development experience.
- Experience with cloud computing on AWS.
- Experience developing container-based testing environments.
- Experience with a Java / IntelliJ / Spring environment.

Our hybrid work model: Coupang's hybrid work model is designed to enable a culture of collaboration that acts as a catalyst to enrich the employee experience. Employees are required to work at least 3 days in the office per week, with the flexibility to work from home 2 days a week, depending on role requirements. Some businesses may require more time in the office due to the nature of the work.
Posted 2 weeks ago
3.0 - 8.0 years
10 - 14 Lacs
Pune
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also be responsible for troubleshooting issues and providing guidance to team members, fostering a collaborative environment that encourages innovation and efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.

Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Good To Have Skills: Experience with Apache Hadoop and Apache Kafka.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying scalable applications.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience in Microsoft Azure is mandatory.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Spark
Good to have skills: Apache Hadoop
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Engage in continuous learning to stay updated with the latest technologies and best practices.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Apache Spark.
- Good To Have Skills: Experience with Apache Hadoop.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing scalable applications using big data technologies.
- Familiarity with cloud platforms and services for application deployment.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
2.0 - 5.0 years
4 - 4 Lacs
Ahmedabad
Work from Office
We are looking for a motivated and organized Team Lead to oversee our Data Entry & Processing Operations team. In this role, you will be responsible for supervising daily workflows, ensuring data quality and turnaround time, and helping your team grow and perform at their best.

Key Responsibilities:
- Lead and manage the Data Processing team responsible for validating land/property-related records for client banks.
- Plan and allocate daily tasks and monitor the accuracy, speed, and quality of outputs.
- Conduct regular reviews to track performance, identify challenges, and resolve them in collaboration with other teams.
- Oversee the Quality Check (QC) process, providing constructive feedback and guidance to improve results.
- Prepare daily, weekly, and monthly reports on productivity and accuracy.
- Document operational procedures, recurring issues, and process improvements.
- Act as the point of contact for coordination with management and other internal departments.

Key Skills & Competencies:
- Strong team leadership and people management skills.
- Excellent organizational and time management abilities.
- Problem-solving mindset with attention to detail.
- Hands-on experience with QC processes is an advantage.
- Ability to identify process gaps and drive continuous improvement.
- Proficient in Gujarati; working knowledge of English is preferred.
- Basic digital literacy (Excel, dashboard tools, internal workflow tools).

Qualifications:
- Bachelor's degree in any discipline (or equivalent work experience).
- 2 to 4 years of experience in data processing or operational roles.
- Minimum 1 year of experience in a leadership or supervisory position.
- Experience in BFSI, real estate, or document-based workflows is a plus.
- Proficiency in Microsoft Excel or Google Sheets.
- Basic understanding of image editing software (online tools).
- Good attention to detail.
- Ability to manage time effectively and work on multiple tasks.
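The "daily, weekly, and monthly reports on productivity and accuracy" responsibility above is, at its core, simple aggregation. This is a hypothetical sketch of such a report; the input shape (operator name plus a pass/fail QC flag per record) and the metric names are assumptions for illustration, not anything specified in the posting.

```python
from collections import defaultdict

def qc_report(records):
    """records: iterable of (operator, passed_qc) pairs.
    Returns {operator: {"processed": count, "accuracy_pct": percent}},
    i.e. per-operator throughput and QC accuracy for one reporting period."""
    counts = defaultdict(lambda: {"processed": 0, "passed": 0})
    for operator, passed in records:
        counts[operator]["processed"] += 1
        counts[operator]["passed"] += int(passed)
    return {
        op: {
            "processed": c["processed"],
            # accuracy = share of records that cleared QC, to one decimal
            "accuracy_pct": round(100.0 * c["passed"] / c["processed"], 1),
        }
        for op, c in counts.items()
    }
```

In the role described, the same calculation would more likely live in Excel or Google Sheets; the sketch just makes the metric definitions explicit.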
Posted 2 weeks ago
5.0 - 9.0 years
7 - 11 Lacs
Noida
Work from Office
We are seeking a talented and dynamic Assistant Manager to join our team, with good exposure to managing projects in the information security and privacy protection domain from scratch.

Experience: 3+ years

Key Objectives and Responsibilities:
As an Assistant Manager, you will be entrusted with the following key responsibilities:
- Leverage industry standards and frameworks such as ISO 27001/2, ISO 22301, ISO 27018, NIST cybersecurity standards, HITRUST, ISO 27701, etc., to assist clients with compliance and governance.
- Design and implement data protection and privacy programs that cater to our clients' specific business needs, ensuring their sensitive information is well safeguarded.
- Evaluate and assess our clients' data protection and privacy practices, offering valuable insights and actionable recommendations for continual improvement.
- Provide guidance and support to clients in adhering to a complex web of national and international laws and regulations, including the EU General Data Protection Regulation (GDPR) and other privacy laws.
- Data audits and assessments: conduct regular data protection impact assessments (DPIAs) and audits to identify and mitigate privacy risks associated with data processing activities.
- Conduct thorough audits of privacy controls to monitor program effectiveness and compliance, ensuring data protection is at its optimal level.
- Foster and maintain productive working relationships with client personnel, promoting effective collaboration and understanding of their specific needs.
- Assist in preparing policies, reports, and schedules for clients and relevant stakeholders, ensuring clear communication and alignment with industry best practices.
- Contribute to cybersecurity engagements, developing cybersecurity strategies, governance, risk, and compliance activities, and cybersecurity policies in line with ISO 27001 and ISO 27701.
- Perform gap assessments, risk assessments, ISMS documentation, and internal audits, and provide support during certification audits to strengthen overall security frameworks.
- Utilize online tools to facilitate incident management and data subject rights processes, ensuring efficient and timely responses to potential data incidents.
- Demonstrate a strong commitment to adhering to workplace policies and procedures, maintaining the highest standards of professionalism and confidentiality.

Requirements:
To be considered for this role, the candidate must meet the following requirements:
- Relevant qualifications such as CIPP/E, CIPM, FIP, DCPLA, CDPO/IN, CDPO/P, ISO 27001 LA/LI, or ISO 27701 LA preferred.
- Minimum 3 years of related work experience, or a master's or MBA degree in business, computer science, information systems, engineering, and/or data protection.
- Sound knowledge of the fundamentals of information security systems.
- Good understanding of GDPR, CCPA, or other privacy laws.
- Competence in governance and reporting, as well as a strong grasp of cyber and privacy risks.
- Excellent communication skills, both written and verbal.
- Proficiency in Microsoft Office Suite (Word, Excel, PowerPoint).
Posted 2 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: Oracle Procedural Language Extensions to SQL (PLSQL), AWS Architecture, Databricks Unified Data Analytics Platform
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, addressing any challenges that arise, and providing guidance to team members to foster a productive work environment. You will also engage in strategic discussions to align project goals with organizational objectives, ensuring that the applications developed meet the highest standards of quality and functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Good To Have Skills: Experience with Databricks Unified Data Analytics Platform, AWS Architecture, Oracle Procedural Language Extensions to SQL (PLSQL).
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
10 - 14 Lacs
Chennai
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also be responsible for troubleshooting issues and providing guidance to team members, fostering a collaborative environment that encourages innovation and efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.

Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Good To Have Skills: Experience with Apache Hadoop and Apache Kafka.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying scalable applications.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience in Microsoft Azure is mandatory.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
10 - 14 Lacs
Hyderabad
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also be responsible for troubleshooting issues and providing guidance to team members, fostering a collaborative environment that encourages innovation and efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.

Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Good To Have Skills: Experience with Apache Hadoop and Apache Kafka.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying scalable applications.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience in Microsoft Azure is mandatory.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and deliver high-quality solutions.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in code reviews to ensure best practices and quality standards are maintained.

Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Good To Have Skills: Experience with Apache Spark and data processing frameworks.
- Strong understanding of data transformation and ETL processes.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience in developing and deploying applications in a distributed environment.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
10 - 14 Lacs
bengaluru
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also be responsible for troubleshooting issues and providing guidance to team members, fostering a collaborative environment that encourages innovation and efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Good-to-Have Skills: Experience with Apache Hadoop and Apache Kafka.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying scalable applications.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with Microsoft Azure is mandatory.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
bengaluru
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Palantir Foundry, MySQL, Python (Programming Language), PySpark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Develop and maintain robust data pipelines and workflows within the Palantir Foundry platform.
- Leverage PySpark and Python for large-scale data processing, transformation, and integration across diverse data sources.
- Design, implement, and optimize advanced SQL queries for data extraction, manipulation, and analysis.
- Collaborate closely with business analysts, data scientists, and other engineers to translate business requirements into scalable data solutions.
- Ensure data quality, integrity, and security throughout the data lifecycle.
- Troubleshoot, debug, and optimize data workflows for performance and reliability.
- Document architecture, processes, and best practices for ongoing support and knowledge sharing.

Required Skills & Qualifications:
Palantir Foundry: Proven experience building data pipelines, managing datasets, and deploying applications within Foundry. Specifics required are as follows:
- Practical skills required to build and maintain production-grade data pipelines, data connections, and ontologies.
- General knowledge of platform capabilities and the specific applications within the Foundry suite that are useful for performing the job of a data engineer.
- Data pipeline development in Foundry:
  - Develop transforms on structured (tabular) and unstructured datasets in Foundry.
  - Apply best practices when building data pipelines.
- Data pipeline maintenance in Foundry:
  - Effectively investigate and fix common issues in data pipelines.
  - Contribute logic changes and performance improvements to transform pipelines feeding mission-critical workflows.
  - Familiarity with recommended support structures.
- Data connection and integration in Foundry:
  - Familiarity with the architecture and capabilities of Data Connection.
  - Set up sources and syncs ingesting tabular data or raw files from external systems into Foundry.
- Ontology design and development in Foundry:
  - Provide data engineering context during ontology design, and implement pipelines backing ontology objects and links based on application requirements.

Spark/PySpark: Strong expertise in distributed data processing and transformation using Spark/PySpark.
Python: Proficiency in Python for scripting, automation, and data wrangling tasks.
SQL: Advanced skills in writing efficient SQL queries for complex data manipulation and reporting.
Familiarity with data modeling, ETL processes, and best practices in data engineering.
Experience working with large and complex data sets from multiple sources.
Excellent problem-solving, communication, and collaboration skills.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Palantir Foundry.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
10 - 14 Lacs
pune
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also be responsible for troubleshooting issues and providing guidance to team members, fostering a collaborative environment that encourages innovation and efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Good-to-Have Skills: Experience with Apache Hadoop and Apache Kafka.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying scalable applications.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with Microsoft Azure is mandatory.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
pune
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Palantir Foundry, MySQL, Python (Programming Language), PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are optimized for performance and usability. You will also engage in problem-solving activities, providing support and enhancements to existing applications while ensuring that all development aligns with best practices and organizational standards.

Roles & Responsibilities:
- Develop and maintain robust data pipelines and workflows within the Palantir Foundry platform.
- Leverage PySpark and Python for large-scale data processing, transformation, and integration across diverse data sources.
- Design, implement, and optimize advanced SQL queries for data extraction, manipulation, and analysis.
- Collaborate closely with business analysts, data scientists, and other engineers to translate business requirements into scalable data solutions.
- Ensure data quality, integrity, and security throughout the data lifecycle.
- Troubleshoot, debug, and optimize data workflows for performance and reliability.
- Document architecture, processes, and best practices for ongoing support and knowledge sharing.

Professional & Technical Skills:
Palantir Foundry: Proven experience building data pipelines, managing datasets, and deploying applications within Foundry. Specifics required are as follows:
- Practical skills required to build and maintain production-grade data pipelines, data connections, and ontologies.
- General knowledge of platform capabilities and the specific applications within the Foundry suite that are useful for performing the job of a data engineer.
- Data pipeline development in Foundry:
  - Develop transforms on structured (tabular) and unstructured datasets in Foundry.
  - Apply best practices when building data pipelines.
- Data pipeline maintenance in Foundry:
  - Effectively investigate and fix common issues in data pipelines.
  - Contribute logic changes and performance improvements to transform pipelines feeding mission-critical workflows.
  - Familiarity with recommended support structures.
- Data connection and integration in Foundry:
  - Familiarity with the architecture and capabilities of Data Connection.
  - Set up sources and syncs ingesting tabular data or raw files from external systems into Foundry.
- Ontology design and development in Foundry:
  - Provide data engineering context during ontology design, and implement pipelines backing ontology objects and links based on application requirements.

Spark/PySpark: Strong expertise in distributed data processing and transformation using Spark/PySpark.
Python: Proficiency in Python for scripting, automation, and data wrangling tasks.
SQL: Advanced skills in writing efficient SQL queries for complex data manipulation and reporting.
Familiarity with data modeling, ETL processes, and best practices in data engineering.
Experience working with large and complex data sets from multiple sources.
Excellent problem-solving, communication, and collaboration skills.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Palantir Foundry.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
3.0 - 8.0 years
10 - 14 Lacs
chennai
Work from Office
About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also be responsible for troubleshooting issues and providing guidance to team members, fostering a collaborative environment that encourages innovation and efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Good-to-Have Skills: Experience with Apache Hadoop and Apache Kafka.
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying scalable applications.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with Microsoft Azure is mandatory.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
15.0 - 20.0 years
9 - 13 Lacs
navi mumbai
Work from Office
About The Role
Project Role: Software Development Lead
Project Role Description: Develop and configure software systems either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must-have skills: AWS Big Data
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Software Development Lead, you will be responsible for developing and configuring software systems, either end-to-end or for specific stages of the product lifecycle. Your typical day will involve collaborating with various teams to ensure that the software meets client requirements, applying your knowledge of technologies and methodologies to support projects effectively, and overseeing the implementation of solutions that enhance operational efficiency and product quality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with strategic goals.
- Knowledge of AWS services such as EC2, S3, RDS, Lambda, and CloudFormation.
- Shift time will be 1:00 PM to 10:30 PM.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in AWS Big Data.
- Strong understanding of data processing frameworks such as Apache Hadoop and Apache Spark.
- Experience with cloud computing services and architectures.
- Familiarity with data warehousing solutions and ETL processes.
- Ability to design and implement scalable data pipelines.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Big Data.
- This position is based in Mumbai.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 2 weeks ago
2.0 - 5.0 years
4 - 7 Lacs
ahmedabad
Work from Office
Roles and Responsibilities:
- Collaborate with stakeholders to understand business requirements and data needs.
- Translate business requirements into scalable and efficient data engineering solutions.
- Design, develop, and maintain data pipelines using AWS serverless technologies.
- Implement data modeling techniques to optimize data storage and retrieval processes.
- Develop and deploy data processing and transformation frameworks for real-time and batch processing.
- Ensure data pipelines are scalable, reliable, and performant for large-scale data sizes.
- Implement data documentation and observability tools and practices to monitor...
Posted 2 weeks ago
4.0 - 7.0 years
9 - 16 Lacs
mumbai, delhi / ncr, bengaluru
Work from Office
About Company: Avisoft (https://avisoft.io/) is a technology and IT services company based in Mohali and Jammu, serving clients globally. We offer Product Engineering, IT Consultancy, Project Outsourcing, and Staff Augmentation services. We partner with businesses to design and build tech platforms from scratch, or to re-engineer and modernize their legacy systems. Our teams have expertise in Full Stack Technologies, REST API Servers, Blockchain, DevOps, Cloud Technologies, Data Engineering, and Test Automation. We are building next-gen SaaS platforms for the e-commerce and health-tech domains.

About the Role: We are seeking a highly skilled Python Developer with at least 4 years of experience to join our team. In this role, you will play a key part in designing and implementing functional requirements, building efficient back-end features, managing testing and bug fixes, and providing mentorship to junior team members. Your expertise in Python development, knowledge of design patterns, and experience with testing frameworks such as pytest or unittest will be crucial in contributing to the success of our projects. In addition, you will leverage AI/ML technologies to enhance our data processing and application solutions.

Responsibilities:
- Design and implement functional requirements for software applications.
- Develop robust and efficient back-end features using Python.
- Integrate AI/ML models and algorithms to solve complex business problems and enhance product functionality.
- Oversee testing processes, addressing and resolving bugs using frameworks like pytest or unittest.
- Prepare comprehensive technical documentation for reference.
- Mentor and coach junior team members, sharing your knowledge and experience.
- Actively participate in code reviews and discussions to ensure code quality.
- Implement software enhancements and suggest improvements for ongoing projects.
- Collaborate with cross-functional teams to integrate AI/ML models into existing applications.
Skills:
- Proven experience as a Python Developer, with a minimum of 4 years in a similar role.
- Excellent understanding and application of design patterns in software development.
- Strong experience with testing frameworks, preferably pytest or unittest.
- Familiarity with data processing frameworks such as pandas, PySpark, or similar.
- AI/ML experience, with expertise in integrating machine learning models into applications.
- Experience with cloud platforms, particularly Amazon Web Services (AWS), including AWS Glue and Amazon SageMaker for deploying models.
- Solid understanding of databases and SQL.
- Familiarity with building back-end solutions and APIs for machine learning models.
- Experience with AI/ML libraries such as TensorFlow, PyTorch, and scikit-learn.
- Exceptional attention to detail in coding and problem-solving.
- Demonstrated leadership skills and the ability to guide and motivate a team.
- Familiarity with React for front-end integration with AI/ML-driven back-end services.
- Bachelor's degree in Computer Science, Information Technology, or a related field.

Keywords: Python, AI/ML integration, TensorFlow/PyTorch/scikit-learn, back-end development, AWS (SageMaker, Glue), data processing (pandas/PySpark), design patterns, unit testing (pytest, unittest), SQL, NoSQL, model deployment, scalable applications, cloud platforms (AWS, Google Cloud, Azure).

Location: Remote, Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
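Since the posting stresses pytest/unittest experience, the expected testing style can be sketched as follows. This is a hypothetical illustration: the helper function and field names are invented, not taken from the posting.

```python
# Hypothetical example: a small data-wrangling helper plus a pytest-style test.
# Function and field names are illustrative only.

def normalize_records(records):
    """Lower-case the keys of each record and drop records with missing values."""
    cleaned = []
    for record in records:
        if all(value is not None for value in record.values()):
            cleaned.append({key.lower(): value for key, value in record.items()})
    return cleaned


def test_normalize_records_drops_incomplete_rows():
    # pytest discovers test_* functions automatically; plain asserts suffice.
    raw = [{"Name": "Asha", "Age": 30}, {"Name": "Ravi", "Age": None}]
    assert normalize_records(raw) == [{"name": "Asha", "age": 30}]


if __name__ == "__main__":
    test_normalize_records_drops_incomplete_rows()
    print("ok")
```

Run with `pytest` (or directly with `python`); the same test body works unchanged under `unittest` by wrapping it in a `TestCase` subclass.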
Posted 2 weeks ago
3.0 - 7.0 years
9 - 12 Lacs
pune
Work from Office
We are looking for a Senior Data Scientist with expertise in Natural Language Processing (NLP), Computer Vision, Generative AI, and IoT data processing. The ideal candidate should have hands-on experience in deploying AI models on cloud (Azure) and edge environments, building data pipelines, and implementing CI/CD workflows. Strong proficiency in Python, SQL, TensorFlow, PyTorch, and cloud services (Azure ML, App Services, Data Factory) is essential. The role requires model monitoring, performance optimization, and IoT device integration for predictive analytics and real-time AI solutions.
Posted 2 weeks ago
5.0 - 7.0 years
4 - 8 Lacs
bengaluru
Work from Office
We are seeking a highly skilled Data Engineer with expertise in Java, PySpark, and big data technologies. The ideal candidate will have in-depth knowledge of Apache Spark, Python, and Java (Java 8 and above, including lambdas, streams, exception handling, collections, etc.). Responsibilities include developing data processing pipelines using PySpark, creating Spark jobs for data transformation and aggregation, and optimizing query performance using file formats like ORC, Parquet, and Avro. Candidates must also have hands-on experience with Spring Core, Spring MVC, Spring Boot, REST APIs, and cloud services like AWS. This role involves designing scalable pipelines for batch and real-time analytics, performing data enrichment, and integrating with SQL databases.
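The "transformation and aggregation" jobs described above come down to group-by-key logic. A minimal standard-library sketch of that logic (a local stand-in for a Spark `groupBy`/`agg` job; the field names are illustrative, not from the posting):

```python
from collections import defaultdict

# Plain-Python stand-in for the kind of Spark aggregation job described above:
# group rows by a key column and sum a value column, mirroring what
# df.groupBy("user").agg(sum("amount")) would do at cluster scale.

def aggregate_amounts(rows):
    totals = defaultdict(float)
    for row in rows:
        totals[row["user"]] += row["amount"]
    return dict(totals)

events = [
    {"user": "a", "amount": 10.0},
    {"user": "b", "amount": 5.0},
    {"user": "a", "amount": 2.5},
]
print(aggregate_amounts(events))  # {'a': 12.5, 'b': 5.0}
```

In PySpark the same shape of computation is distributed across partitions, which is why columnar formats like ORC and Parquet matter: they let Spark read only the key and value columns being aggregated.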
Posted 2 weeks ago
3.0 - 8.0 years
12 - 14 Lacs
mumbai
Work from Office
Design, develop, and implement data solutions using Azure Data Stack components. Write and optimize advanced SQL queries for data extraction, transformation, and analysis. Develop data processing workflows and ETL processes using Python and PySpark.
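The SQL-driven extract/transform step described above can be sketched with Python's standard-library sqlite3 module, used here purely as a local stand-in for the Azure SQL engines the role would actually target; the table and column names are illustrative.

```python
import sqlite3

# Minimal sketch of an extract/transform step: load raw rows, then run the
# kind of aggregation query an ETL job would execute before loading results.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("west", 50.0), ("east", 25.0)],
)

rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('east', 125.0), ('west', 50.0)]
```

In a production Azure workflow the same query shape would run against the warehouse engine, with PySpark handling the volumes that exceed a single node.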
Posted 2 weeks ago