7.0 - 12.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Roles and Responsibilities:
- Work with the team to understand requirements, and design and develop algorithm/rules engines using Python.
- Use BigQuery as the backend, leveraging not only generic SQL/DB entities and operations but also BigQuery-specific operations.
- Write Bash/PowerShell scripts.
- Develop APIs using the Flask framework and implement data ingestion pipelines to handle large volumes of data from various upstream sources.
- Implement unit tests to ensure code reliability and maintainability.
- Analyze system issues and debug, track, and resolve them effectively.
Experience needed:
- 7+ years of experience writing, debugging, and troubleshooting code in mainstream Python.
- 7+ years of experience writing PL/SQL stored procedures, functions, and triggers.
- 5+ years of experience developing applications using web frameworks such as Flask.
- 5+ years of experience deploying and releasing software using Jenkins and CI/CD pipelines.
- 3+ years of experience in data handling, management, and ETL processes.
- 3+ years of experience with data extraction using API calls.
- 1+ years of experience with big data processing solutions using BigQuery, Pub/Sub, GCS, and others.
- Proficiency with source code control management systems such as GitHub.
- Any experience with Angular is an added advantage.
- Any experience with Looker Studio is an added advantage.
Open Date: Feb-07-2025
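The "rules engine" part of this role is a common pattern; here is a minimal sketch of the idea in plain Python (rule names, fields, and thresholds are invented for illustration; in practice the predicates would often be pushed down into BigQuery SQL rather than evaluated row by row in Python):

```python
# Hypothetical rule definitions: each rule names a field, an operator, and a threshold.
RULES = [
    {"name": "high_value", "field": "amount", "op": ">", "value": 10_000},
    {"name": "flagged_region", "field": "region", "op": "in", "value": {"XX", "YY"}},
]

def evaluate(record: dict) -> list:
    """Return the names of all rules that a record triggers."""
    ops = {
        ">": lambda a, b: a > b,
        "in": lambda a, b: a in b,
    }
    return [r["name"] for r in RULES if ops[r["op"]](record[r["field"]], r["value"])]

print(evaluate({"amount": 25_000, "region": "XX"}))  # → ['high_value', 'flagged_region']
```

Keeping rules as data rather than code makes them easy to store in a table and unit-test, which matches the posting's emphasis on testability.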
Posted 1 month ago
5.0 - 10.0 years
9 - 12 Lacs
Kolkata
Work from Office
Role:
- Proven experience as an AI Developer or in a similar role.
- Develop and optimize deep learning models to improve operational efficiency.
- Design and implement neural network architectures tailored to specific tasks.
- Process and visualize complex data to extract actionable insights.
- Enhance natural language processing capabilities to streamline communication and automation.
- Deploy AI solutions that integrate seamlessly with existing systems and workflows.
- Ensure AI security and compliance with industry standards.
- Provide technical leadership and mentorship to junior team members, fostering best practices and continuous learning.
- Collaborate with cross-functional teams, including data scientists, software engineers, product managers, and business stakeholders, to define AI solution objectives and deliver impactful results.
Qualifications/Requirements:
- BSc/BTech/BE degree in Computer Science, Engineering, Data Science, or a relevant field.
- 5+ years of experience in AI.
- Proficiency in Python and relevant AI frameworks (TensorFlow, PyTorch, scikit-learn, LangChain, etc.).
- Ability to take AI models from initial idea to execution.
- Proficiency in the mathematical principles that underpin AI algorithms and theories.
- Specialized knowledge in generative AI (GenAI) and large language models (LLMs).
- Experience in selecting, enhancing, and deploying deep learning models.
- Strong understanding of neural network architectures, specifically for GenAI and LLM applications.
- Expertise in data processing and visualization.
- Adept in natural language processing and AI security.
- Build and maintain robust data pipelines for large-scale data ingestion, preprocessing, feature engineering, and model training.
- Understanding of retrieval-augmented generation (RAG) concepts and vector databases.
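The RAG and vector-database requirement boils down to nearest-neighbour search over embeddings; the sketch below uses made-up three-dimensional "embeddings" purely to show the mechanism (a real system would use model-generated embeddings and a vector store):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector DB": precomputed embeddings for a few documents (values invented).
DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # → ['refund policy']
```

In a production RAG pipeline, the retrieved documents would then be stuffed into the LLM prompt as context; the retrieval step itself is exactly this ranking, just at scale and with approximate indexes.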
Posted 1 month ago
6.0 - 11.0 years
5 - 8 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Data Engineer - Bangalore (Marathahalli), Hybrid Mode
Requirements:
- 6+ years of professional software engineering, mostly focused on the following:
- Developing ETL pipelines involving big data.
- Developing data processing/analytics applications, primarily using PySpark.
- Experience developing applications on the cloud (AWS), mostly using services related to storage, compute, ETL, DWH, analytics, and streaming.
- Clear understanding of, and ability to implement, distributed storage, processing, and scalable applications.
- Experience working with SQL and NoSQL databases.
- Ability to write and analyze SQL, HQL, and other query languages for NoSQL databases.
- Proficiency in writing distributed, scalable data processing code using PySpark, Python, and related libraries.
Data Engineer - AEP Competency:
- Experience developing applications that consume services exposed as REST APIs.
Special consideration given for:
- Experience working with container-orchestration systems like Kubernetes.
- Experience working with any enterprise-grade ETL tools.
- Experience/knowledge of Adobe Experience Cloud solutions.
- Experience/knowledge of web analytics or digital marketing.
- Experience/knowledge of Google Cloud platforms.
- Experience/knowledge of data science, ML/AI, R, or Jupyter.
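The ETL work described above always has the same extract → transform → load shape; here is a toy pure-Python version of the transform step (field names are invented; in this role the equivalent would be PySpark DataFrame operations over distributed data):

```python
from collections import defaultdict

def transform(rows):
    """Clean and aggregate raw event rows: drop malformed records,
    then sum revenue per customer (a typical reduce-side aggregation)."""
    totals = defaultdict(float)
    for row in rows:
        customer = row.get("customer")
        revenue = row.get("revenue")
        if customer and isinstance(revenue, (int, float)):
            totals[customer] += revenue
    return dict(totals)

raw = [
    {"customer": "a", "revenue": 10.0},
    {"customer": "a", "revenue": 5.0},
    {"customer": None, "revenue": 99.0},   # malformed: dropped
    {"customer": "b", "revenue": "oops"},  # malformed: dropped
]
print(transform(raw))  # → {'a': 15.0}
```

In PySpark the same logic would be a `filter` followed by a `groupBy(...).sum(...)`, with the engine handling partitioning and shuffles.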
Posted 1 month ago
3.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
We are looking for an experienced data engineer to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals.
- Minimum of 3 years of experience as a data and API developer using Python/Node.js, Azure Functions, Cosmos DB, Azure Event Hubs, Azure Data Lake Storage, Azure Storage Queues, etc.
- Excellent technical, analytical, and organizational skills.
- Effective written and verbal communication skills, including technical writing.
- Hands-on engineers who are curious about technology, able to adapt quickly to change, and who understand the technologies supporting areas such as cloud computing (AWS, Azure (preferred), etc.), microservices, streaming technologies, networking, and security.
- Hands-on experience with event processing and pub-sub consumption patterns.
- Good knowledge of and exposure to data models, databases like SQL Server, NoSQL databases, Elasticsearch, API management, etc.
- Design and develop data management and data persistence solutions for application use cases, leveraging relational and non-relational databases and enhancing our data processing capabilities.
- Working experience with standard API development protocols, API gateways, and tokens.
- Experience building and maintaining a data warehouse/data lake in a production environment with efficient ETL design, implementation, and maintenance.
- A team player: a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously in a fast-paced environment, working with cross-functional teams.
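The "pub-sub consumption pattern" called out above is, at its core, a consumer loop draining a stream until a completion signal arrives; here is a standard-library sketch of that loop (a real implementation would use the Azure Event Hubs SDK with partitions and checkpointing instead of an in-process queue):

```python
import queue
import threading

events = queue.Queue()

def consumer(out):
    # Drain the queue until the producer sends the None sentinel.
    while True:
        evt = events.get()
        if evt is None:
            break
        out.append(evt.upper())  # stand-in for real event processing

processed = []
worker = threading.Thread(target=consumer, args=(processed,))
worker.start()
for evt in ["order_created", "order_paid"]:
    events.put(evt)
events.put(None)  # sentinel: no more events
worker.join()
print(processed)  # → ['ORDER_CREATED', 'ORDER_PAID']
```

The design point is the decoupling: producers never wait on consumers, and the consumer's processing logic can change without touching the publishing side.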
Posted 1 month ago
7.0 - 11.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Job Description
We are looking for a skilled and experienced Data Engineer to join our team and contribute to enhancement work in the Advanced Master Data Management (AMDM) space. The ideal candidate will have strong hands-on experience with Databricks, SQL, and Python, and must possess practical expertise in Azure Cloud services. You will play a key role in designing, developing, and optimizing scalable data pipelines and solutions to support enterprise-wide data initiatives.
Responsibilities:
- Develop and maintain scalable ETL/data pipelines using Databricks, SQL, and Python on Azure Cloud.
- Design and implement data integration workflows across structured and unstructured sources within the AMDM domain.
- Collaborate with data architects and business teams to translate requirements into efficient data solutions.
- Ensure data quality, consistency, and integrity across systems.
- Monitor, troubleshoot, and optimize data workflows in Azure-based environments.
- Leverage Azure services like Azure Data Factory, Azure Data Lake, Azure Synapse, and Azure Key Vault as part of solution delivery.
Qualifications:
- 6+ years of hands-on experience in Data Engineering or ETL development.
- Strong proficiency in Databricks for distributed data processing and transformation.
- Advanced skills in SQL and Python for building and automating data workflows.
- Solid working experience with core Azure data services.
- Experience with relational and non-relational database technologies (e.g., SQL Server, PostgreSQL, Oracle).
- Familiarity with Stibo or similar Master Data Management (MDM) tools.
Posted 1 month ago
8.0 - 13.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Not Applicable | Specialism: SAP | Management Level: Senior Associate
Summary
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
- DevOps skills in Java, Maven, Jenkins, NodeJS, Docker, APIGEE, MongoDB, Kafka, and DataPower.
- Code frameworks/APIs on AWS using Java/Python/Ruby/PHP SDKs.
- Assist in the code promotion process to production environments, leveraging Jenkins, Maven, Ant, and other deployment tools.
- Debug and resolve Jenkins and project onboarding issues; assist in CI creation for the dev team.
- Demonstrated ability to write programs using a high-level programming language such as Java, Ruby, or Python.
- Real-time, near-real-time, and batch data processing.
- Self-service reporting.
- Real-time and ad-hoc data analytics.
- Code Chef recipes/cookbooks in an Amazon Web Services (AWS) public cloud environment.
- Programming data ingestion/processing in any of the scripting languages.
Mandatory skill sets: DevOps; Linux with strong scripting experience; CI/CD; containerization (Docker, Kubernetes); infrastructure automation.
Preferred skill sets: frameworks/APIs on AWS using Java/Python/Ruby/PHP SDKs; knowledge of any public cloud.
Years of experience required: 5 to 8 years.
Education qualification: BE/B.Tech/MBA/MCA.
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration.
Required Skills: DevOps, Linux. Optional Skills: JavaScript Frameworks.
Posted 1 month ago
3.0 - 6.0 years
8 - 13 Lacs
Pune
Work from Office
Responsibilities:
- Develop and execute test scripts to validate data pipelines, transformations, and integrations.
- Formulate and maintain test strategies, including smoke, performance, functional, and regression testing, to ensure data processing and ETL jobs meet requirements.
- Collaborate with development teams to assess changes in data workflows and update test cases to preserve data integrity.
- Design and run tests for data validation, storage, and retrieval using Azure services like Data Lake, Synapse, and Data Factory, adhering to industry standards.
- Continuously enhance automated tests as new features are developed, ensuring timely delivery per defined quality standards.
- Participate in data reconciliation and verify Data Quality frameworks to maintain data accuracy, completeness, and consistency across the platform.
- Share knowledge and best practices by collaborating with business analysts and technology teams to document testing processes and findings.
- Communicate testing progress effectively with stakeholders, highlighting issues or blockers and ensuring alignment with business objectives.
- Maintain a comprehensive understanding of the Azure Data Lake platform's data landscape to ensure thorough testing coverage.
Skills & Experience:
- 3-6 years of QA experience with a strong focus on big data testing, particularly in Data Lake environments on the Azure cloud platform.
- Proficient in Azure Data Factory, Azure Synapse Analytics, and Databricks for big data processing and scaled data quality checks.
- Proficient in SQL, capable of writing and optimizing both simple and complex queries for data validation and testing purposes.
- Proficient in PySpark, with experience in data manipulation and transformation, and a demonstrated ability to write and execute test scripts for data processing and validation.
- Hands-on experience with functional and system integration testing in big data environments, ensuring seamless data flow and accuracy across multiple systems.
- Knowledge of and ability to design and execute test cases in a behaviour-driven development environment.
- Fluency in Agile methodologies, with active participation in Scrum ceremonies and a strong understanding of Agile principles.
- Familiarity with tools like Jira, including experience with Xray or Zephyr for defect and test case management.
- Proven experience working on high-traffic, large-scale software products, ensuring data quality, reliability, and performance under demanding conditions.
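Data-reconciliation testing of the kind described usually reduces to a handful of invariant checks between source and target; here is a minimal sketch with invented column names (in this stack the same assertions would run over PySpark DataFrames in Azure rather than Python lists):

```python
def validate(source_rows, target_rows, key="id"):
    """Minimal reconciliation checks a Data Lake QA suite might run
    after a pipeline load. Returns check name -> pass/fail."""
    target_keys = [row[key] for row in target_rows]
    return {
        "row_count_matches": len(source_rows) == len(target_rows),
        "keys_unique": len(target_keys) == len(set(target_keys)),
        "no_null_keys": all(k is not None for k in target_keys),
    }

src = [{"id": 1}, {"id": 2}]
tgt = [{"id": 1}, {"id": 2}]
print(validate(src, tgt))  # → all three checks True
```

Each check maps naturally onto a behaviour-driven scenario ("Given a completed load, then row counts match"), which is how such suites are typically organised.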
Posted 1 month ago
8.0 - 10.0 years
7 - 11 Lacs
Chennai
Work from Office
We are looking for a highly skilled Senior Data Engineer with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python.
Key Responsibilities:
- Design, develop, and maintain scalable data warehouse solutions.
- Write and optimise complex SQL queries for data extraction, transformation, and reporting.
- Develop and automate data pipelines using Python.
- Work with AWS cloud services for data storage, processing, and analytics.
- Collaborate with cross-functional teams to provide data-driven insights and solutions.
- Ensure data integrity, security, and performance optimisation.
Required Skills & Experience:
- 8-10 years of experience in Data Warehousing & Analytics.
- Strong proficiency in writing complex SQL queries, with a deep understanding of query optimisation, stored procedures, and indexing.
- Hands-on experience with Python for data processing and automation.
- Experience working with AWS cloud services.
- Ability to work independently and collaborate with teams across different time zones.
Good to Have:
- Experience in the SAS domain and understanding of financial data structures.
- Hands-on experience with reporting tools like Power BI or Tableau.
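Complex reporting SQL of the kind this role optimises often leans on window functions; here is a self-contained illustration using Python's built-in sqlite3 (the table and columns are invented, and production queries would of course target the warehouse engine, but the SQL pattern is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('south', '2024-01', 100), ('south', '2024-02', 150),
        ('north', '2024-01', 80),  ('north', '2024-02', 60);
""")
# Window function: running revenue total per region, a common reporting pattern.
rows = conn.execute("""
    SELECT region, month, revenue,
           SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM sales
    ORDER BY region, month
""").fetchall()
for r in rows:
    print(r)
```

Computing the running total in SQL keeps the aggregation next to the data instead of pulling raw rows into Python, which is usually the first optimisation a warehouse engineer reaches for. (Note: window functions require SQLite 3.25 or newer.)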
Posted 1 month ago
8.0 - 13.0 years
5 - 9 Lacs
Kolkata
Work from Office
Energy Aspects is in search of an experienced Lead Software Engineer who specializes in the design, development, and architecture of multi-tenant, service-based software systems, with a focus on cloud technologies (AWS, GCP), Python, and data analytics. The successful candidate will be adept at building data-intensive analytical applications, creating insightful dashboards, and leveraging both SQL and NoSQL databases to drive business intelligence.
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 8 years of experience in software development, with a significant focus on cloud-based, multi-tenant architectures.
- Expertise in Python programming for building complex data analytics platforms.
- Profound knowledge of AWS or GCP cloud services, with a track record of implementing multi-tenant systems.
- Strong experience with SQL and NoSQL databases, including design and optimization for multi-tenancy.
- Proven ability to lead and manage software development teams, with excellent team-building skills.
- Experience creating dashboards and BI solutions that support multi-tenant architectures.
- Familiarity with implementing service-based (microservices) software architectures.
- Exceptional problem-solving abilities and a strong grasp of software development best practices.
- Cloud platform certifications (AWS, GCP) are highly regarded.
- Experience with advanced analytics, machine learning, and AI integration in multi-tenant environments.
- Knowledge of front-end technologies for dashboard integration in a multi-tenant context.
- Understanding of containerization and orchestration technologies (e.g., Docker, Kubernetes) in a multi-tenant setup.
Responsibilities:
- Lead the architectural design of scalable multi-tenant software solutions on cloud platforms such as AWS or GCP.
- Develop and oversee the implementation of end-to-end data analytics applications, ensuring multi-tenancy and data isolation.
- Design and construct interactive, service-based dashboards and BI tools that cater to various tenant needs.
- Manage and mentor a team of software engineers, fostering a collaborative and high-performance culture.
- Drive the adoption of cloud services and frameworks for efficient multi-tenant application development.
- Ensure the integrity, confidentiality, and availability of tenant data across all services.
- Collaborate with stakeholders to define technical requirements and system architecture plans.
- Lead the team in Agile development practices and continuous improvement initiatives.
- Optimize application performance for large-scale data processing and analytics.
- Uphold and advance security and compliance standards within the multi-tenant environment.
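Tenant data isolation in a multi-tenant service is commonly enforced by scoping every query with a tenant predicate; below is a deliberately minimal sketch of that idea (the table and column names are hypothetical, and real systems typically layer on row-level security, separate schemas, or per-tenant encryption as well):

```python
def scoped_query(tenant_id: str, base_sql: str):
    """Append a parameterized tenant predicate so every query is
    isolated to a single tenant's rows. Column name is illustrative."""
    return f"{base_sql} WHERE tenant_id = ?", (tenant_id,)

sql, params = scoped_query("acme", "SELECT * FROM dashboards")
print(sql)     # → SELECT * FROM dashboards WHERE tenant_id = ?
print(params)  # → ('acme',)
```

Centralising the predicate in one helper (or in the data-access layer) means no individual endpoint can forget it, which is the main architectural point of tenant isolation.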
Posted 1 month ago
8.0 - 13.0 years
13 - 16 Lacs
Kochi
Work from Office
About KnowBe4
KnowBe4, the provider of the world's largest security awareness training and simulated phishing platform, is used by tens of thousands of organizations around the globe. KnowBe4 enables organizations to manage the ongoing problem of social engineering by helping them train employees to make smarter security decisions, every day. Fortune has ranked us as a best place to work for women, for millennials, and in technology for four years in a row! We have been certified as a "Great Place To Work" in 8 countries, plus we've earned numerous other prestigious awards, including Glassdoor's Best Places To Work. Our team values radical transparency, extreme ownership, and continuous professional development in a welcoming workplace that encourages all employees to be themselves. Whether working remotely or in-person, we strive to make every day fun and engaging; from team lunches to trivia competitions to local outings, there is always something exciting happening at KnowBe4. Please submit your resume in English.
The individual in this role is responsible for leading software development teams to develop new and exciting products for KnowBe4's customers, alongside other engineers in a fast-paced, agile development environment.
Responsibilities:
- Leads a software team that develops software using the KnowBe4 Software Development Lifecycle and Agile methodologies.
- Recommends solutions to engineering problems.
- Translates KnowBe4's strategic goals into operational plans.
- Provides coordination across team boundaries.
Requirements:
- BS or equivalent plus 8 years of technical experience; MS or equivalent plus 3 years of technical experience; or Ph.D. or equivalent plus 2 years of technical experience.
- 3 years of experience managing software development teams.
- Build, manage, and deliver high-quality software products and features.
- Ability to manage a team of highly talented software engineers.
- Extensive experience building and integrating REST-based APIs, with authentication/authorization best practices, in enterprise-grade production environments.
- Experience building apps and microservices on the AWS platform using Python.
- Expert knowledge of at least one web framework such as Python Django/Flask, Rails, or Express.
- Understanding of and experience in building software systems following software design principles.
- Demonstrable knowledge of fundamental cloud concepts around multi-tenancy, scaling out, and serverless.
- Working experience writing clean, unit-tested, and secure code.
- Working knowledge of relational databases such as MySQL/Postgres and expertise in SQL. Knowledge of NoSQL databases such as Mongo and Elasticsearch is preferred.
- Experience with continuous delivery and integration pipelines: Docker, GitLab, Terraform, and other automated deployment and testing tools.
- Openness to learning new technologies and programming languages as needed.
- Experience working with APIs in the cybersecurity industry and an understanding of the basics of the current security landscape (attack frameworks, security log processing, basic knowledge of AV/EDR/DLP/CASB, etc.) is a huge plus.
- Experience building scalable data processing pipelines is a plus.
Our Fantastic Benefits
We offer company-wide bonuses based on monthly sales targets, employee referral bonuses, adoption assistance, tuition reimbursement, certification reimbursement, certification completion bonuses, and a relaxed dress code - all in a modern, high-tech, and fun work environment. For more details about our benefits in each office location, please visit www.knowbe4.com/careers/benefits.
Note: An applicant assessment and background check may be part of your hiring procedure. No recruitment agencies, please.
Posted 1 month ago
10.0 - 15.0 years
8 - 12 Lacs
Bengaluru
Work from Office
About Bazaarvoice
At Bazaarvoice, we create smart shopping experiences. Through our expansive global network, product-passionate community, and enterprise technology, we connect thousands of brands and retailers with billions of consumers. Our solutions enable brands to connect with consumers and collect valuable user-generated content at an unprecedented scale. This content achieves global reach by leveraging our extensive and ever-expanding retail, social, search, and syndication network. And we make it easy for brands and retailers to gain valuable business insights from real-time consumer feedback with intuitive tools and dashboards. The result is smarter shopping: loyal customers, increased sales, and improved products.
The problem we are trying to solve: Brands and retailers struggle to make real connections with consumers. It's a challenge to deliver trustworthy and inspiring content in the moments that matter most during the discovery and purchase cycle. The result? Time and money spent on content that doesn't attract new consumers, convert them, or earn their long-term loyalty.
Our brand promise: closing the gap between brands and consumers.
Founded in 2005, Bazaarvoice is headquartered in Austin, Texas, with offices in North America, Europe, Asia, and Australia. It's official: Bazaarvoice is a Great Place to Work in the US, Australia, India, Lithuania, France, Germany, and the UK!
How you'll make an impact:
- Work on our JavaScript platform to help launch new clients and improve current partnerships and toolsets.
- Build automated data processing solutions using Python and Bash scripting technologies.
- Deploy and monitor automated solutions via AWS.
- Review code changes made by others for accuracy prior to deployment.
The must-have skills that matter:
- 10+ years as a full-stack developer.
- Fluent with JavaScript, HTML, and CSS.
- Experience with Python, Bash, Unix environments, and Git.
- Understanding of server-side web apps in any language.
- Familiarity with AWS (or similar cloud technologies).
Why You'll Love Working with Us
- Work with cutting-edge technology in a collaborative, global team, and make a meaningful impact.
- Competitive salary + good benefits (insurance, annual leave, bonuses, referral rewards, and more).
- We're Great Place to Work Certified (3 years in a row!).
- Hybrid work model (3 days in office - Global Technology Park, Bellandur).
If this sounds like you, let's talk! #LI-Hybrid #LI-SR1
Why join Bazaarvoice
Customer is key: We see our own success through our customers' outcomes. We approach every situation with a customer-first mindset.
Transparency & Integrity Builds Trust: We believe in the power of authentic feedback because it's in our DNA. We do the right thing when faced with hard choices. Transparency and trust accelerate our collective performance.
Passionate Pursuit of Performance: Our energy is contagious, because we hire for passion, drive, and curiosity. We love what we do, and we're laser-focused on our mission.
Innovation over Imitation: We seek to innovate, as we are not content with the status quo. We embrace agility and experimentation as an advantage.
Stronger Together: We bring our whole selves to the mission and find value in diverse perspectives. We champion what's best for Bazaarvoice before individuals or teams. As a stronger company, we build a stronger community.
Commitment to diversity and inclusion: Bazaarvoice provides equal employment opportunities (EEO) to all team members and applicants according to their experience, talent, and qualifications for the job, without regard to race, color, national origin, religion, age, disability, sex (including pregnancy, gender stereotyping, and marital status), sexual orientation, gender identity, genetic information, military/veteran status, or any other category protected by federal, state, or local law in every location in which the company has facilities. Bazaarvoice believes that diversity and an inclusive company culture are key drivers of creativity, innovation, and performance. Furthermore, a diverse workforce and the maintenance of an atmosphere that welcomes versatile perspectives will enhance our ability to fulfill our vision of creating the world's smartest network of consumers, brands, and retailers.
Posted 1 month ago
5.0 - 10.0 years
9 - 14 Lacs
Pune
Work from Office
DeepIntent is the healthcare advertising platform marketers trust to drive effective campaigns. Purpose-built for healthcare and privacy-safe, the DeepIntent platform unites media, identity, and clinical data to produce industry-defining health intelligence. Consistently proven to deliver real-world results based on real-time optimization, the DeepIntent platform powers more than 600 pharmaceutical brands and all the leading healthcare agencies to innovate, differentiate, and reach their audiences across all channels and devices. For more information, visit DeepIntent.com or find us on LinkedIn.
What You'll Do:
As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York City, India, and Bosnia. The role will focus on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to measurement of campaign outcomes, Rx, and the patient journey, and in supporting the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, performing analysis and creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients as needed.
- Explore ways to create better predictive models.
- Analyze medical claims, clinical, demographic, and clickstream data to produce and present actionable insights.
- Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics.
- Design and deploy new iterations of production-level code.
- Contribute posts to our upcoming technical blog.
Who You Are:
- Bachelor's degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, OR, or Data Science. A graduate degree is strongly preferred.
- 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertising, telecom, or other areas requiring customer-level predictive analytics.
- Advanced proficiency in performing statistical analysis in Python, including relevant libraries, is required.
- Experience with data processing, transformation, and building model pipelines using tools such as Spark, Airflow, and Docker.
- An understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns, or familiarity with US healthcare patient and provider systems (e.g. medical claims, medications).
- Varied, hands-on predictive machine learning experience (deep learning, boosting algorithms, inference).
- Interest in translating complex quantitative results into meaningful findings and interpretable deliverables, and in communicating with less technical audiences orally and in writing.
- Can write production-level code and work with Git repositories.
- Active Kaggle participant.
- Working experience with SQL.
- Familiarity with medical and healthcare data (medical claims, Rx) preferred.
- Conversant with cloud technologies such as AWS or Google Cloud.
DeepIntent is committed to bringing together individuals from different backgrounds and perspectives. We strive to create an inclusive environment where everyone can thrive, feel a sense of belonging, and do great work together. DeepIntent's commitment to providing equal employment opportunities extends to all aspects of employment, including job assignment, compensation, discipline, and access to benefits and training.
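At its simplest, the model-scoring step in a predictive pipeline like the one described is "weighted features through a link function"; here is a toy logistic scorer with invented feature names and weights (a real pipeline would learn parameters with, say, boosting or deep learning libraries and serve them behind production code):

```python
import math

# Hypothetical weights a trained model might have learned; feature names invented.
WEIGHTS = {"visits_last_30d": 0.8, "claims_count": 0.5}
BIAS = -2.0

def score(features: dict) -> float:
    """Logistic scoring: P(outcome | features) = sigmoid(bias + w . x).
    Missing features default to 0."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

p = score({"visits_last_30d": 3, "claims_count": 1})
print(round(p, 3))  # → 0.711
```

Keeping scoring as a pure function of a feature dict is what makes the same model easy to unit-test offline and to deploy behind a production API.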
Posted 1 month ago
1.0 - 6.0 years
4 - 9 Lacs
Chennai
Hybrid
Dear All,
We are hiring Master Data Management (MDM) Consultants for multiple client projects across domains.
Note: Only one position requires SAP S/4HANA experience; candidates without SAP experience are also encouraged to apply. Candidates with a notice period of 30 days or less will be prioritized; those with longer notice periods may still be considered based on fit.
Work Location & Mode:
Role 1: Hybrid (Chennai-based candidates preferred)
Role 2: On-site at customer office (Chennai)
Roles & Responsibilities:
- Support the migration of master data from multiple systems into a centralized ERP platform.
- Assist in data consolidation and standardization to maintain a single source of truth.
- Participate in data migration and integration activities.
- Create, maintain, and ensure the quality of Material, Customer, and Vendor master data.
- Perform data validation, cleansing, and quality checks to maintain high data integrity.
- Collaborate with cross-functional teams to gather and define business data requirements.
- Identify and resolve data quality issues and process inefficiencies.
- Conduct data analysis and prepare reports to support business decision-making.
- Support data governance policies and ensure compliance with data standards.
- Contribute to the development and enforcement of data management best practices.
Soft Skills:
- Excellent communication and collaboration abilities.
- Strong analytical skills with attention to detail.
- Detail-oriented, with a focus on accuracy and quality.
- Ability to manage multiple, changing priorities while working effectively in a team environment.
- Excellent problem-solving skills.
Desired Candidate Profile:
- 1 to 6 years of experience in Master Data Management or related data-focused roles.
- Passion for working with data and driving data quality initiatives.
- Bachelor's degree in any discipline (Engineering mandatory).
- Proficiency in Microsoft Excel; exposure to Power BI is an advantage.
- For SAP-specific roles: experience with SAP MDM on SAP S/4HANA is preferred.
- Exposure to Java, Python, or SQL is a plus.
- Familiarity with PLM tools like Teamcenter or Windchill is a plus.
If you are passionate about data and excited to work on challenging data transformation projects, apply now and be part of a dynamic and growing team!
Posted 1 month ago
0.0 years
1 - 2 Lacs
Chennai
Work from Office
Greetings from Annexmed!
Huge openings for Data Analyst - Non-Voice Process (Freshers) - Chennai
Desired Skills:
- Typing skill (upper/lower case)
- Qualification: Diploma or any degree
- Passed out between 2022 and 2025
- Good communication skills
- Location: candidates must reside within a 15 km radius of the office location
Interview Time: 11:00 AM to 5:00 PM
Interview Days: Monday to Friday (Thursday, May 01, 2025, is a holiday)
Contact: Kamali (HR) - 8939711311
Shift: Mid Shift
Posted 1 month ago
7.0 - 11.0 years
9 - 13 Lacs
Hyderabad
Work from Office
We're Hitachi Digital Services, a global digital solutions and transformation business with a bold vision of our world's potential. We're people-centric and here to power good. Every day, we future-proof urban spaces, conserve natural resources, protect rainforests, and save lives. This is a world where innovation, technology, and deep expertise come together to take our company and customers from what's now to what's next. We make it happen through the power of acceleration. Imagine the sheer breadth of talent it takes to bring a better tomorrow closer to today. We don't expect you to fit every requirement; your life experience, character, perspective, and passion for achieving great things in the world are equally as important to us. The Team: We're a leader in cutting-edge innovation, the transformative power of cloud technology, and converged and hyperconverged solutions. Our mission is to empower clients to securely store, manage, and modernize their digital core, unlocking valuable insights and driving data-driven value. This strong, diverse, and collaborative group of technology professionals collaborates with teams to support our customers as they store, enrich, activate, and monetise their data, bringing value to every line of their business. The Role: We are seeking an experienced Data Architect with expertise in Workday Reporting and data automation. The ideal candidate will have 10-12 years of experience, with a strong background in data architecture, reporting, and process automation. Key Responsibilities: Workday Reporting Expertise: Design and develop complex Workday reports (Advanced, Composite, and Matrix reports). Deliver data-driven insights using Workday's reporting tools. Ensure the integrity and alignment of reporting solutions with organizational goals. Data Architecture: Create and implement robust data architecture frameworks. Manage seamless end-to-end data flows and system integrations. Optimize data storage, retrieval, and transformation processes for performance and scalability. Automation and Process Optimization: Develop automation strategies for repetitive tasks using tools and scripts. Innovate data automation solutions to minimize manual efforts. Maintain quality, consistency, and timeliness in automated processes. Stakeholder Collaboration: Partner with HR, IT, and business teams to understand reporting and data needs. Serve as a subject matter expert in Workday Reporting and data automation. Lead workshops and training sessions to enhance team understanding of reporting tools and processes. Continuous Improvement: Identify and implement opportunities to improve reporting and data processes. Stay updated on emerging trends in data architecture and Workday technologies. Championing diversity, equity, and inclusion: Diversity, equity, and inclusion (DEI) are integral to our culture and identity. Diverse thinking, a commitment to allyship, and a culture of empowerment help us achieve powerful results. We want you to be you, with all the ideas, lived experience, and fresh perspective that brings. We support your uniqueness and encourage people from all backgrounds to apply and realize their full potential as part of our team. How We Look After You: We help take care of your today and tomorrow with industry-leading benefits, support, and services that look after your holistic health and wellbeing. We're also champions of life balance and offer flexible arrangements that work for you (role and location dependent). We're always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you'll experience a sense of belonging, and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with. We're proud to say we're an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran, age, disability status or any other protected characteristic. Should you need reasonable accommodations during the recruitment process, please let us know so that we can do our best to set you up for success.
Posted 1 month ago
2.0 - 5.0 years
4 - 7 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
We are seeking a skilled Data Scientist with 2 to 5 years of experience, specializing in Machine Learning, PySpark, and Databricks, with a proven track record in long-range demand and sales forecasting. This role is crucial for the development and implementation of an automotive OEM's next-generation Intelligent Forecast Application. The position will involve building, optimizing, and deploying large-scale machine learning models for complex, long-term forecasting challenges using distributed computing frameworks, specifically PySpark on the Databricks platform. The work will directly support strategic decision-making across the automotive value chain, including areas like long-term demand planning, production scheduling, and inventory optimization. The ideal candidate will have hands-on experience developing and deploying ML models for forecasting, particularly long-range predictions, in a production environment using PySpark and Databricks. This role requires strong technical skills in machine learning, big data processing, and time series forecasting, combined with the ability to work effectively within a technical team to deliver robust and scalable long-range forecasting solutions. Role & Responsibilities: Machine Learning Model Development & Implementation for Long-Range Forecasting: Design, develop, and implement scalable and accurate machine learning models specifically for long-range demand and sales forecasting challenges. Data Processing and Feature Engineering with PySpark: Build and optimize large-scale data pipelines for ingesting, cleaning, transforming, and engineering features relevant to long-range forecasting from diverse, complex automotive datasets using PySpark on Databricks. Deployment and MLOps on Databricks: Develop and implement robust code for model training, inference, and deployment of long-range forecasting models directly within the Databricks platform.
Performance Evaluation & Optimization: Evaluate long-range forecasting model performance using relevant metrics (e.g., MAE, RMSE, MAPE, considering metrics suitable for longer horizons) and optimize models and data processing pipelines for improved accuracy and efficiency within the PySpark/Databricks ecosystem. Work effectively as part of a technical team, collaborating with other data scientists, data engineers, and software developers to integrate ML long-range forecasting solutions into the broader forecasting application built on Databricks. Communicate technical details and forecasting results effectively within the technical team. Requirements: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Applied Mathematics, or a closely related quantitative field. 2 to 5 years of hands-on experience in a Data Scientist or Machine Learning Engineer role. Proven experience developing and deploying machine learning models in a production environment. Demonstrated experience in long-range demand and sales forecasting. Significant hands-on experience with PySpark for large-scale data processing and machine learning. Extensive practical experience working with the Databricks platform, including notebooks, jobs, and ML capabilities. Expert proficiency in PySpark. Expert proficiency in the Databricks platform. Strong proficiency in Python and SQL. Experience with machine learning libraries compatible with PySpark (e.g., MLlib, or integrating other libraries). Experience with advanced time series forecasting techniques and their implementation. Experience with distributed computing concepts and optimization techniques relevant to PySpark. Hands-on experience with a major cloud provider (Azure, AWS, or GCP) in the context of using Databricks. Familiarity with MLOps concepts and tools used in a Databricks environment. Experience with data visualization tools.
Analytical skills with a deep understanding of machine learning algorithms and their application to forecasting. Ability to troubleshoot and solve complex technical problems related to big data and machine learning workflows. Preferred / Good to have: Experience with specific long-range forecasting methodologies and libraries used in a distributed environment. Experience with real-time or streaming data processing using PySpark for near-term forecasting components that might complement long-range models. Familiarity with automotive data types relevant to long-range forecasting (e.g., economic indicators affecting car sales, long-term market trends). Experience with distributed version control systems (e.g., Git). Knowledge of agile development methodologies. Preferred location is Kolkata; should be open to travel to Jaipur & Bangalore.
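The evaluation metrics this posting names (MAE, RMSE, MAPE) reduce to short formulas; as an illustrative plain-Python sketch (the function name is my own, not part of the role's stack, where these would typically come from a library):

```python
import math

def forecast_metrics(actual, predicted):
    """Compute MAE, RMSE, and MAPE for a forecast.

    MAPE is expressed as a percentage and skips zero actuals to
    avoid division by zero, a common pitfall on long horizons.
    """
    errors = [a - p for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    nonzero = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    mape = 100 * sum(abs((a - p) / a) for a, p in nonzero) / len(nonzero)
    return mae, rmse, mape

# Toy example: three periods of actual vs. predicted demand.
mae, rmse, mape = forecast_metrics([100, 200, 400], [110, 190, 360])
```

Note that RMSE penalizes the single large error (40 units) much more heavily than MAE does, which is why the two are usually reported together.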
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Pune
Work from Office
Job Description: Are you ready to make it happen at Mondelēz International? Join our Mission to Lead the Future of Snacking. Make It With Pride. Together with analytics team leaders you will support our business with excellent data models to uncover trends that can drive long-term business results. How you will contribute: You will: Execute the business analytics agenda in conjunction with analytics team leaders. Work with best-in-class external partners who leverage analytics tools and processes. Use models/algorithms to uncover signals/patterns and trends to drive long-term business performance. Execute the business analytics agenda using a methodical approach that conveys to stakeholders what business analytics will deliver. What you will bring: A desire to drive your future and accelerate your career and the following experience and knowledge: Using data analysis to make recommendations to analytics leaders. Understanding of best-in-class analytics practices. Knowledge of Key Performance Indicators (KPIs) and scorecards. Knowledge of BI tools like Tableau, Excel, Alteryx, R, Python, etc. is a plus. More about this role: You will be part of Mondelēz's biggest Analytics Team, delivering world-class analytical solutions showcasing global-level business impact using sophisticated tools and technologies. You will be driving Visualization and Data Processing capabilities to the next level by delivering business solutions impacting day-to-day productivity. Build trust and credibility with different stakeholders to achieve common organizational goals and targets. Develop & scale up global solutions under the advanced analytics and reporting domain for Mondelēz Supply Chain.
Drive automation by performing best-in-class technology integration in Mondelēz (SAP, SQL, GCP). Build best-in-class interactive analytics tools with superior user experience by leveraging Power BI / Dash (Python) / JS (React). Develop business solutions which are scalable, automated, self-sustainable and interactive with end users. Collaborate with external partners to deliver strategic projects on time and right first time. Develop & execute a strategy to automate all existing KPI dashboards and dataflows. Develop custom models/algorithms to uncover signals/patterns and trends to drive long-term business performance. Execute the business analytics program agenda using a methodical approach that conveys to stakeholders what business analytics can deliver. Sustain existing reports and dashboards, ensuring timely updates and no downtime. What you need to know about this position: What extra ingredients you will bring: Education / Certifications: Bachelor's or Master's degree in a quantitative field such as Statistics, Applied Mathematics, Engineering, or Data Science. Job specific requirements: Demonstrated experience in applying analytical techniques to solve business problems. Proven experience in analytics and reporting projects with cross-functional and global stakeholders. Experience in visualization tools (Tableau/Power BI/Spotfire). Experience in data management and processing tools (Talend/Alteryx/R/Prep/SQL). Experience in web app development tools and projects may be advantageous (Power BI / Dash / React / JS). Experience in statistical analytical tools and projects may be advantageous. Experience in using SAP may be advantageous. Extensive experience in bringing data together from multiple sources, bringing out insights & showcasing them in an easy-to-understand visualization. Experience in the FMCG/Food Products/Supply Chain industry.
Good communication skills. Total relevant experience of 3-5 years. Travel requirements: Work schedule: Within Country Relocation support available and for candidates voluntarily moving internationally some minimal support is offered through our Volunteer International Transfer Policy. Business Unit Summary: Job Type: Regular. Analytics & Modelling, Analytics & Data Science
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Why Lytx?: As our MLOps Engineer you will join our Applied Machine Learning Team, which develops machine learning and computer vision algorithms to monitor and assess the state of drivers and their environments to identify risk and improve safety for our clients. You will contribute to all aspects of the development cycle to optimize workflows, dataset generation, model performance and code efficiency to help enhance and differentiate us as the leader in the Video Safety and Telematics industry. If this sounds like you, we encourage you to apply! What You'll Do: Build and maintain cloud deployment of ML models and surrounding infrastructure. Contribute to infrastructure and process improvements for data collection, labeling, model development and deployment. Design and implement R&D data engineering solutions for delivery of ML model value from device to cloud, including message payload design, data ingest and database architecture. Help prepare and automate builds for device model deployment. Assist on projects led by other team members via data processing, programming, monitoring of production applications, etc. Other duties as assigned. What You'll Need: Bachelor's degree in Computer Science or equivalent experience. 4 to 6 years of experience with a strong background in MLOps, Python, and the GNU/Linux CLI. A versatile and adaptable engineer who can address the evolving needs of the team. Strong understanding of data engineering principles and architecture. Knowledge of relational database modeling and integration; experience with NoSQL is helpful. Ability to manage cloud resources and technologies within AWS, including SageMaker, EC2 and S3. Experience with software automation tools, e.g., Airflow, Ansible, Terraform, Jenkins. Experience with automated unit testing and regression testing methodologies. Familiar with Linux software build toolchains and patterns
(e.g., Make, gcc). Experience with source control and tracking (Git). Strong teammate who enjoys working in a collaborative, fast-paced, team-focused environment. Innovation Lives Here: You go all in no matter what you do, and so do we. At Lytx, we're powered by cutting-edge technology and Happy People. You want your work to make a positive impact in the world, and that's what we do. Join our diverse team of hungry, humble and capable people united to make a difference. Together, we help save lives on our roadways. Find out how good it feels to be a part of an inclusive, collaborative team. We're committed to delivering an environment where everyone feels valued, included and supported to do their best work and share their voices. Lytx, Inc. is proud to be an equal opportunity/affirmative action employer and maintains a drug-free workplace. We're committed to attracting, retaining and maximizing the performance of a diverse and inclusive workforce. EOE/M/F/Disabled/Vet.
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Bengaluru
Work from Office
KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focussed and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment. The person will work on a variety of projects in a highly collaborative, fast-paced environment. The person will be responsible for software development activities of KPMG, India. As part of the development team, he/she will work on the full life cycle of the process, develop code and unit testing. He/she will work closely with Technical Architects, Business Analysts, user interaction designers, and other software engineers to develop new product offerings and improve existing ones. Additionally, the person will ensure that all development practices are in compliance with KPMG's best practices policies and procedures. This role requires quick ramp-up on new technologies whenever required. Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Role: Azure Data Engineer. Location: Bangalore. Experience: 6 to 8 years. Data Management: Design, implement, and manage data solutions on the Microsoft Azure cloud platform. Data Integration: Develop and maintain data pipelines, ensuring efficient data extraction, transformation, and loading (ETL) processes using Azure Data Factory. Data Storage: Work with various Azure data storage solutions like Azure SQL Database, Azure Data Lake Storage, and Azure Cosmos DB. Big Data Processing: Utilize big data technologies such as Azure Databricks and Apache Spark to handle and analyze large datasets. Data Architecture: Design and optimize data models and architectures to meet business requirements. Performance Monitoring: Monitor and optimize the performance of data systems and pipelines.
Collaboration: Collaborate with data scientists, analysts, and other stakeholders to support data-driven decision-making. Security and Compliance: Ensure data solutions comply with security and regulatory requirements. Technical Skills: Proficiency in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data tools. Analytical Skills: Strong analytical and problem-solving skills. Communication: Excellent communication and teamwork skills. Certifications: Relevant certifications such as Microsoft Certified: Azure Data Engineer Associate are a plus.
Posted 1 month ago
2.0 - 7.0 years
22 - 25 Lacs
Bengaluru
Work from Office
The Amazon Search team creates customer-focused search and advertising solutions and technologies. Whenever a customer visits an Amazon site worldwide and types in a query or browses through product categories, Amazon Search services go to work. We design, develop, and deploy high performance, fault-tolerant distributed search systems used by millions of Amazon customers every day. Our Search Relevance team works to maximize the quality and effectiveness of the search experience for visitors to Amazon websites worldwide. Amazon has grown rapidly and will continue to do so in the foreseeable future. Providing a high quality search experience is a unique challenge as Amazon expands to new customers, countries, categories, and product lines. We are seeking a software engineer to join the Relevance India team. This team's charter is to increase the pace at which Amazon expands and improve the search experience at launch. In practice, we aim to create infrastructure and build innovative solutions that reduce the time and effort needed for expansions and improve the search experience on the day of the launch. - Design, develop, and implement production-level code that serves search requests. - Own the full development cycle: design, development, impact assessment, A/B testing (including interpretation of results) and production deployment. - Design and apply data-driven and machine learning techniques to provide optimal ranking. - Develop new ranking features and techniques building upon the latest results from the research community. - Collaborate with other engineers and scientists to find technical solutions to complex design problems. - Participate in aspects of the R&D process, from experimenting with new ideas to exploring new techniques. - Get exposure to large-scale use of various AWS components such as EC2, S3, EMR, SQS, SNS, etc. - Take ownership.
Understand the needs of various search teams, distil those into coherent projects, and implement them with an eye on long-term impact. - Be a leader. Use your expertise to set a high bar for the team, mentor team members, and set the tone for how to take on and deliver on large, impossible-sounding projects. - Be curious. You will work alongside systems engineers, machine learning scientists, and data analysts. Your effectiveness and impact will depend on discussing problems with and learning from them. - 2+ years of non-internship professional software development experience - 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience - Experience programming with at least one software programming language - 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations - Bachelor's degree in Computer Science or equivalent - Solid understanding of fundamental algorithms, data structures, system design, and complexity analysis. - Experience with web-scale data processing using Spark or similar technologies. - Experience with various components of a Search or Ads pipeline such as Query Understanding, Matching, and Relevance is a plus. - Ability to discuss complex topics with both technical and non-technical audiences.
Posted 1 month ago
5.0 - 8.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Developer with hands-on design and development experience to build robust APIs and services using Java and Spring Boot, coupled with hands-on experience in data processing. Has the knowledge and experience to design and implement scalable on-prem / cloud solutions that efficiently manage and leverage large datasets. Proficient in Java / Spring Boot with demonstrated ability to integrate with different databases and other APIs and services while ensuring security and best practices are followed throughout the development lifecycle. Responsibilities: Design, develop, and maintain APIs using Java and Spring Boot, ensuring efficient data exchange between applications. Implement API security measures including authentication, authorization, and rate limiting. Document API specifications and maintain API documentation for internal and external users. Develop integrations with different data sources and other APIs / web services. Develop integrations with IBM MQ and Kafka. Develop / maintain CI/CD pipelines. Perform performance evaluation and application tuning. Monitor and troubleshoot applications for stability and performance. Required Skills: Experience: 5-8 years of experience. Programming Languages: Proficiency in Java. Web Development: Experience with SOAP and RESTful services. Database Management: Strong knowledge of SQL (Oracle). Version Control: Expertise in using version control systems like Git. CI/CD: Familiarity with CI/CD tools such as GitLab CI and Jenkins. Containerization & Orchestration: Experience with Docker and OpenShift. Messaging Queues: Knowledge of IBM MQ and Apache Kafka. Cloud Services: Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Desired Skills: Analytical Thinking: Ability to break down complex problems and devise efficient solutions. Debugging: Skilled in identifying and fixing bugs in code and systems. Algorithm Design: Proficiency in designing and optimizing algorithms.
Leadership: Proven leadership skills with experience mentoring junior engineers. Communication: Strong verbal and written communication skills. Teamwork: Ability to collaborate effectively with cross-functional teams. Time Management: Competence in managing time and meeting project deadlines. Education: Bachelor's degree in Computer Science, Software Engineering, or related field. A Master's degree is a plus. Certifications: Relevant certifications in AWS are a plus.
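One of the API security measures listed above, rate limiting, is often implemented as a token bucket. A minimal illustrative sketch follows; it is in plain Python rather than this posting's Java/Spring Boot stack (where a gateway filter or library would normally provide this), and the class and parameter names are invented:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow `rate` requests/sec,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# With a frozen clock, a capacity-2 bucket admits exactly two back-to-back requests.
bucket = TokenBucket(rate=1.0, capacity=2, clock=lambda: 0.0)
results = [bucket.allow() for _ in range(3)]
```

The injectable `clock` makes the limiter deterministic under test, which is the same reason production services expose their time source.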
Posted 1 month ago
0.0 - 5.0 years
11 - 13 Lacs
Hyderabad
Work from Office
As a Data Engineer, you will work alongside data scientists and domain experts to enable teams to answer scientific questions using multi-modal data on the data42 platform. You will be involved in gathering use-case requirements, performing data engineering activities, and building ETL processes/data pipelines in quick iterations to deliver data ready for analysis. You will integrate data engineering best practices and data quality checks and seek to continuously optimize efficiency. Your responsibilities will include, but are not limited to: Collaborates with domain experts, data scientists and other stakeholders to fulfil use-case specific data needs. Designs, develops, tests, and maintains ETL processes/data pipelines to extract, prepare and iterate data for analysis in close alignment with TA / DA scientific leads and data scientists. Implements and maintains data checks to ensure accurate and high-quality data in close collaboration with domain experts. Identifies and rectifies data inconsistencies and irregularities. Promotes a culture of transparency and communication regarding data modifications and lineage to all stakeholders. Implements and advocates for data engineering best practices, ensuring ETL processes/data pipelines are efficient, well-documented and well-tested. Plays a role in knowledge sharing across data42 and the wider data engineering community at Novartis. Ensures compliance with Security and Governance Principles. Minimum Requirements: Bachelor's degree in Computer Science or other quantitative field (Mathematics, Statistics, Physics, Engineering, etc.) or equivalent practical experience. Proven experience as a data engineer, data wrangler or a similar role. Exceptional programming skills with expertise in Python, R and Spark. Experience and familiarity with a variety of data types, including but not limited to images, tabular, unstructured, and text. Experience in scalable data processing engines, data ingestion, extraction and modeling.
Proficient knowledge of statistics, with an ability to assess data quality, errors, inconsistencies, etc. Good knowledge of data engineering best practices. Excellent communication and stakeholder management skills. Demonstrated ability to work independently and as part of global Agile teams. Desirable additional skills in two or more of the following areas: Hands-on experience with Palantir Foundry (Code Repository, Code Workbook, Contour, Data Lineage, etc.) Knowledge of CDISC data standards (SDTM, ADaM). Experience using AI (e.g., GenAI/LLMs) for data wrangling. Experience with pooling of clinical trial data. High-level understanding of the drug discovery and development process.
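Data checks of the kind this role describes (missing values, implausible ranges, inconsistencies) can be expressed as named rule functions applied row by row. The sketch below is illustrative only, with invented rule and field names; a real data42 pipeline would use a framework or Spark-level expectations rather than pure Python:

```python
def run_checks(rows, checks):
    """Apply named check predicates to each row; return {check_name: [failing row indices]}."""
    failures = {name: [] for name in checks}
    for i, row in enumerate(rows):
        for name, predicate in checks.items():
            if not predicate(row):
                failures[name].append(i)
    return failures

# Toy clinical-style records: the second row fails both checks.
rows = [
    {"subject": "S1", "age": 34},
    {"subject": None, "age": 210},  # missing id, implausible age
]
checks = {
    "subject_not_null": lambda r: r["subject"] is not None,
    "age_in_range": lambda r: r["age"] is not None and 0 <= r["age"] <= 120,
}
failures = run_checks(rows, checks)
```

Reporting failing row indices per check, rather than a single pass/fail flag, is what makes the "identifies and rectifies inconsistencies" step actionable.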
Posted 1 month ago
0.0 - 4.0 years
18 - 19 Lacs
Gurugram
Work from Office
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join Team Amex and let's lead the way together. With a focus on digitization, innovation, and analytics, the Enterprise Digital teams create central, scalable platforms and customer experiences to help markets across all of these priorities. Their charter is to drive scale for the business and accelerate innovation for both immediate impact as well as long-term transformation of our business. A unique aspect of Enterprise Digital Teams is the integration of diverse skills across all its remit. Enterprise Digital Teams has a very broad range of responsibilities, resulting in a broad range of initiatives around the world. The American Express Enterprise Digital Experimentation & Analytics (EDEA) team leads the Enterprise Product Analytics and Experimentation charter for Brand Performance Marketing and Digital Acquisition & Membership experiences as well as Enterprise Platforms. The focus of this collaborative team is to drive growth by enabling efficiencies in paid performance channels and evolve our digital experiences with actionable insights and analytics. The team specializes in using data around digital product usage to drive improvements in the acquisition customer experience to deliver higher satisfaction and business value. About this Role: This role will report to the Manager of the International Acquisition experience analytics team within Enterprise Digital Experimentation & Analytics (EDEA) and will be based in Gurgaon.
The candidate will be responsible for delivery of highly impactful analytics to optimize our Digital Acquisition Experiences across International Markets (Shop, Apply, GO2, etc.). Deliver strategic analytics focused on Digital Acquisition experiences across International Markets aimed at optimizing our customer experiences. Define and build key KPIs to monitor the acquisition journey performance and success. Support the development of new products and capabilities. Deliver read-outs of experiments, uncovering insights and learnings that can be utilized to further optimize the customer journey. Gain deep functional understanding of the enterprise-wide product capabilities and associated platforms over time and ensure analytical insights are relevant and actionable. Power in-depth strategic analysis and provide analytical and decision support by mining digital activity data along with AXP closed-loop data. Minimum Qualifications: Advanced degree in a quantitative field (e.g., Finance, Engineering, Mathematics, Computer Science). Strong programming skills are preferred. Some experience with Big Data programming languages (Hive, Spark), Python, SQL. Experience in large-scale data processing and handling; an understanding of data science is a plus. Ability to work in a dynamic, cross-functional environment, with strong attention to detail. Excellent communication skills with the ability to engage, influence, and encourage partners to drive collaboration and alignment. Preferred Qualifications: Strong analytical/conceptual thinking competence to solve unstructured and complex business problems and articulate key findings to senior leaders/partners in a succinct and concise manner. Basic knowledge of statistical techniques for experimentation: hypothesis testing, regression, t-test, chi-square test.
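The statistical techniques listed for experiment read-outs include the t-test; its core, Welch's two-sample t-statistic, fits in a few lines. This is an illustrative plain-Python sketch with toy numbers, not tied to any Amex tooling (in practice one would reach for scipy.stats.ttest_ind):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t-statistic: difference in sample means
    scaled by the combined standard error (unequal variances allowed)."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

# Control vs. treatment metric values (toy numbers).
t = welch_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

The sign of the statistic tells you which group's mean is higher; significance then comes from comparing |t| against the t-distribution at the Welch-Satterthwaite degrees of freedom, which is the part a stats library handles.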
Posted 1 month ago
3.0 - 7.0 years
15 - 19 Lacs
Bengaluru
Work from Office
We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog. Key Responsibilities: Design and implement ETL/ELT pipelines using Databricks and PySpark. Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets. Develop high-performance SQL queries and optimize Spark jobs. Collaborate with data scientists, analysts, and business stakeholders to understand data needs. Ensure data quality and compliance across all stages of the data lifecycle. Implement best practices for data security and lineage within the Databricks ecosystem. Participate in CI/CD, version control, and testing practices for data pipelines. Required Skills: Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits). Strong hands-on skills with PySpark and Spark SQL. Solid experience writing and optimizing complex SQL queries. Familiarity with Delta Lake, data lakehouse architecture, and data partitioning. Experience with cloud platforms like Azure or AWS. Understanding of data governance, RBAC, and data security standards. Preferred Qualifications: Databricks Certified Data Engineer Associate or Professional. Experience with tools like Airflow, Git, Azure Data Factory, or dbt. Exposure to streaming data and real-time processing. Knowledge of DevOps practices for data engineering.
Mandatory skill sets: Databricks
Preferred skill sets: Databricks
Education qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Field of Study required: Master Degree Computer Applications, Master of Business Administration, Bachelor of Engineering, Master of Engineering, Bachelor of Technology
Degrees/Field of Study preferred:
Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling
Posted 1 month ago
4.0 - 8.0 years
11 - 16 Lacs
Bengaluru
Work from Office
In data engineering at PwC, you will focus on designing and building data infrastructure and systems that enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
- Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
- Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
- Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
- Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements
- Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
- Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
- Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
- Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g. SQL Server, PostgreSQL, MySQL).
- Experience with big data technologies and frameworks (e.g. Hadoop, Hive, HBase) is a plus.
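The extract-transform-load cycle described above can be sketched in miniature with only the standard library, with `csv` and `sqlite3` standing in for PySpark reading from Azure Data Lake Storage and writing to a warehouse; every file, table, and column name here is invented for the example:

```python
import csv
import io
import sqlite3

# Toy ETL pass. In the role described, the extract step would be PySpark
# reading from Azure Data Lake Storage, and the load step would target
# Azure Synapse or a Delta table rather than sqlite3.
raw = io.StringIO("id,amount\n1,10.5\n2,bad\n3,7.25\n")

# Extract: parse the source records.
records = list(csv.DictReader(raw))

# Transform: cast types and drop rows that fail validation.
clean = []
for rec in records:
    try:
        clean.append((int(rec["id"]), float(rec["amount"])))
    except ValueError:
        continue  # a production pipeline would route this row to quarantine

# Load: write the validated rows to the target store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE amounts (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO amounts VALUES (?, ?)", clean)
total = conn.execute("SELECT SUM(amount) FROM amounts").fetchone()[0]
print(total)  # 17.75 -- the "bad" row was dropped in the transform step
```

The shape is the same at scale: a read stage, a typed/validated transform stage, and an idempotent write stage, with monitoring wrapped around each.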
Mandatory skill sets: Spark, PySpark, Azure
Education qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study required: Master of Business Administration, Master of Engineering, Bachelor of Engineering
Degrees/Field of Study preferred:
Required Skills: Data Science
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline
Posted 1 month ago
The data processing job market in India is thriving with opportunities for job seekers in the field. With the growing demand for data-driven insights in various industries, the need for professionals skilled in data processing is on the rise. Whether you are a fresh graduate looking to start your career or an experienced professional looking to advance, there are ample opportunities in India for data processing roles.
Major cities in India, including Bengaluru, Hyderabad, and Kolkata, are actively hiring for data processing roles, with a multitude of job opportunities available for job seekers.
The average salary range for data processing professionals in India varies based on experience and skill level. Entry-level positions can expect to earn between INR 3-6 lakh per annum, while experienced professionals can earn upwards of INR 10 lakh per annum.
A typical career path in data processing may include roles such as Data Analyst, Data Engineer, Data Scientist, and Data Architect. As professionals gain experience and expertise in the field, they may progress from Junior Data Analyst to Senior Data Analyst, and eventually to roles such as Data Scientist or Data Architect.
In addition to data processing skills, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and R. Strong analytical and problem-solving skills are also essential for success in data processing roles.
As you explore opportunities in the data processing job market in India, remember to prepare thoroughly for interviews and showcase your skills and expertise confidently. With the right combination of skills and experience, you can embark on a successful career in data processing in India. Good luck!