6.0 years
0 Lacs
Ahmedabad
On-site
About YipitData:
YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world's largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc.'s Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency.
Why You Should Apply NOW:
You'll be working with many strategic engineering leaders within the company. You'll report directly to the Director of Data Engineering. You will help build out our Data Engineering team presence in India. You will work with a global team. You'll be challenged with a lot of big data problems.
About The Role:
We are seeking a highly skilled Senior Data Engineer to join our dynamic Data Engineering team. The ideal candidate possesses 6-8 years of data engineering experience. An excellent candidate should have a solid understanding of Spark and SQL, and have data pipeline experience. Hired individuals will play a crucial role in helping to build out our data engineering team to support our strategic pipelines and optimize for reliability, efficiency, and performance. Additionally, Data Engineering serves as the gold standard for all other YipitData analyst teams, building and maintaining the core pipelines and tooling that power our products. This high-impact, high-visibility team is instrumental to the success of our rapidly growing business. This is a unique opportunity to be the first hire on this team, with the potential to build and lead the team as its responsibilities expand. This is a hybrid opportunity based in India. During training and onboarding, we will expect several hours of overlap with US working hours. Afterward, standard IST working hours are permitted, with the exception of 1-2 days per week when you will join meetings with the US team.
As Our Senior Data Engineer You Will:
Report directly to the Senior Manager of Data Engineering, who will provide significant, hands-on training on cutting-edge data tools and techniques. Build and maintain end-to-end data pipelines. Help set best practices for our data modeling and pipeline builds. Create documentation, architecture diagrams, and other training materials. Become an expert at solving complex data pipeline issues using PySpark and SQL. Collaborate with stakeholders to incorporate business logic into our central pipelines. Deeply learn Databricks, Spark, and other ETL tooling developed internally.
You Are Likely To Succeed If:
You hold a Bachelor's or Master's degree in Computer Science, STEM, or a related technical discipline. You have 6+ years of experience as a Data Engineer or in other technical functions. You are excited about solving data challenges and learning new skills. You have a strong understanding of working with data and building data pipelines. You are comfortable working with large-scale datasets using PySpark, Delta, and Databricks. You understand business needs and the rationale behind data transformations to ensure alignment with organizational goals and data strategy. You are eager to constantly learn new technologies. You are a self-starter who enjoys working collaboratively with stakeholders. You have exceptional verbal and written communication skills.
Nice to have: Experience with Airflow, dbt, Snowflake, or equivalent.
What We Offer:
Our compensation package includes comprehensive benefits, perks, and a competitive salary. We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer. Job Applicant Privacy Notice
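As a point of reference for the pipeline work this role describes, here is a minimal, illustrative PySpark batch step that reads raw events, applies a simple transformation, and writes a Delta table. The paths, table layout, and columns are hypothetical, and the sketch assumes a Databricks-style environment where Delta Lake is already available.

```python
# Illustrative sketch only: a minimal PySpark + Delta Lake batch step.
# Paths and columns are hypothetical; on Databricks, `spark` is provided for you.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("example-delta-pipeline")
    .getOrCreate()
)

# Hypothetical raw input location.
raw = spark.read.json("/mnt/raw/events/2024-01-01/")

cleaned = (
    raw
    .filter(F.col("event_type").isNotNull())                 # drop malformed rows
    .withColumn("event_date", F.to_date("event_timestamp"))  # derive a partition column
    .dropDuplicates(["event_id"])                             # basic de-duplication
)

(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/curated/events_delta")                        # hypothetical curated path
)
```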
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Noida
On-site
5 - 7 Years 2 Openings Noida
Role description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.
Outcomes:
Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns, and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions.
Measures of Outcomes:
Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. # of defects post delivery. # of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times). Average time to detect, respond to, and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches.
Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design, processes, and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.
Skill Examples:
Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples:
Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.
Additional Comments:
Skills: Cloud Platforms (AWS, MS Azure, GCP, etc.); Containerization and Orchestration (Docker, Kubernetes, etc.); API development; Data Pipeline construction using languages like Python, PySpark, and SQL; Data Streaming (Kafka, Azure Event Hub, etc.); Data Parsing (Akka, MinIO, etc.); Database Management (SQL and NoSQL, including ClickHouse, PostgreSQL, etc.); Agile Methodology (Git, Jenkins, or Azure DevOps, etc.); JS-style connectors/frameworks for frontend/backend; Collaboration and Communication Skills. AWS Cloud, Azure Cloud, Docker, Kubernetes.
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. 
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
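To illustrate the SQL windowing functions named in the skill examples above, here is a small, purely illustrative PySpark snippet that runs a window query through spark.sql. The table and columns are hypothetical.

```python
# Illustrative sketch only: SQL windowing functions run via PySpark.
# The orders table and its columns are hypothetical sample data.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("window-function-example").getOrCreate()

orders = spark.createDataFrame(
    [("c1", "2024-01-01", 120.0), ("c1", "2024-01-05", 80.0), ("c2", "2024-01-02", 200.0)],
    ["customer_id", "order_date", "amount"],
)
orders.createOrReplaceTempView("orders")

# Rank each customer's orders by recency and compute a per-customer running total.
ranked = spark.sql("""
    SELECT
        customer_id,
        order_date,
        amount,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) AS recency_rank,
        SUM(amount)  OVER (PARTITION BY customer_id ORDER BY order_date)      AS running_total
    FROM orders
""")
ranked.show()
```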
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with Hadoop ecosystem, Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Key Responsibilities:
Azure Cloud & Databricks:
- Design and build efficient data pipelines using Azure Databricks (PySpark).
- Implement business logic for data transformation and enrichment at scale.
- Manage and optimize Delta Lake storage solutions.
API Development:
- Develop REST APIs using FastAPI to expose processed data.
- Deploy APIs on Azure Functions for scalable and serverless data access.
Data Orchestration & ETL:
- Develop and manage Airflow DAGs to orchestrate ETL processes.
- Ingest and process data from various internal and external sources on a scheduled basis.
Database Management:
- Handle data storage and access using PostgreSQL and MongoDB.
- Write optimized SQL queries to support downstream applications and analytics.
Collaboration:
- Work cross-functionally with teams to deliver reliable, high-performance data solutions.
- Follow best practices in code quality, version control, and documentation.
Required Skills & Experience:
- 5+ years of hands-on experience as a Data Engineer.
- Strong experience with Azure Cloud services.
- Proficient in Azure Databricks, PySpark, and Delta Lake.
- Solid experience with Python and FastAPI for API development.
- Experience with Azure Functions for serverless API deployments.
- Skilled in managing ETL pipelines using Apache Airflow.
- Hands-on experience with PostgreSQL and MongoDB.
- Strong SQL skills and experience handling large datasets.
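As an illustration of the "expose processed data via REST APIs" responsibility above, here is a minimal FastAPI sketch. The endpoint, response model, and in-memory data source are hypothetical stand-ins; a real service along these lines would read from PostgreSQL or MongoDB and would be packaged for Azure Functions.

```python
# Illustrative sketch only: a minimal FastAPI service exposing processed metrics.
# The metric store below is a hypothetical stand-in for a curated table.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="processed-data-api")

class Metric(BaseModel):
    name: str
    value: float

# Stand-in for data produced by an upstream Databricks pipeline.
_PROCESSED = {"daily_active_users": 1523.0, "avg_order_value": 42.7}

@app.get("/metrics/{name}", response_model=Metric)
def get_metric(name: str) -> Metric:
    if name not in _PROCESSED:
        raise HTTPException(status_code=404, detail="metric not found")
    return Metric(name=name, value=_PROCESSED[name])

# Run locally with: uvicorn main:app --reload
```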
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolve technical escalations and develop technical capability within the Production Specialists.
Do:
Oversee and support process by reviewing daily transactions on performance parameters. Review performance dashboard and the scores for the team. Support the team in improving performance parameters by providing technical support and process guidance. Record, track, and document all queries received, problem-solving steps taken and total successful and unsuccessful resolutions. Ensure standard processes and procedures are followed to resolve all client queries. Resolve client queries as per the SLAs defined in the contract. Develop understanding of process/product for the team members to facilitate better client interaction and troubleshooting. Document and analyze call logs to spot most occurring trends to prevent future problems. Identify red flags and escalate serious client issues to Team leader in cases of untimely resolution. Ensure all product information and disclosures are given to clients before and after the call/email requests. Avoid legal challenges by monitoring compliance with service agreements.
Handle technical escalations through effective diagnosis and troubleshooting of client queries. Manage and resolve technical roadblocks/escalations as per SLA and quality requirements. If unable to resolve the issues, timely escalate the issues to TA & SES. Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions. Troubleshoot all client queries in a user-friendly, courteous and professional manner. Offer alternative solutions to clients (where appropriate) with the objective of retaining customers’ and clients’ business. Organize ideas and effectively communicate oral messages appropriate to listeners and situations. Follow up and make scheduled call backs to customers to record feedback and ensure compliance to contract SLAs.
Build people capability to ensure operational excellence and maintain superior customer service levels of the existing account/client. Mentor and guide Production Specialists on improving technical knowledge. Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialist. Develop and conduct trainings (Triages) within products for production specialists as per target. Inform client about the triages being conducted. Undertake product trainings to stay current with product features, changes and updates. Enroll in product-specific and any other trainings per client requirements/recommendations. Identify and document most common problems and recommend appropriate resolutions to the team. Update job knowledge by participating in self-learning opportunities and maintaining personal networks.
Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, Customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, Technical Test performance
Mandatory Skills: PySpark. Experience: 5-8 Years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. 
We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 week ago
5.0 - 10.0 years
35 - 40 Lacs
Bengaluru
Work from Office
As a Senior Data Engineer, you will proactively design and implement data solutions that support our business needs while adhering to data protection and privacy standards. In addition to this, you would also be required to manage the technical delivery of the project, lead the overall development effort, and ensure timely and quality delivery.
Responsibilities:
Data Acquisition: Proactively design and implement processes for acquiring data from both internal systems and external data providers. Understand the various data types involved in the data lifecycle, including raw, curated, and lake data, to ensure effective data integration.
SQL Development: Develop advanced SQL queries within database frameworks to produce semantic data layers that facilitate accurate reporting. This includes optimizing queries for performance and ensuring data quality.
Linux Command Line: Utilize Linux command-line tools and functions, such as bash shell scripts, cron jobs, grep, and awk, to perform data processing tasks efficiently. This involves automating workflows and managing data pipelines.
Data Protection: Ensure compliance with data protection and privacy requirements, including regulations like GDPR. This includes implementing best practices for data handling and maintaining the confidentiality of sensitive information.
Documentation: Create and maintain clear documentation of designs and workflows using tools like Confluence and Visio. This ensures that stakeholders can easily communicate and understand technical specifications.
API Integration and Data Formats: Collaborate with RESTful APIs and AWS services (such as S3, Glue, and Lambda) to facilitate seamless data integration and automation. Demonstrate proficiency in parsing and working with various data formats, including CSV and Parquet, to support diverse data processing needs.
Key Requirements:
5+ years of experience as a Data Engineer, focusing on ETL development. 3+ years of experience in SQL and writing complex queries for data retrieval and manipulation. 3+ years of experience in Linux command-line and bash scripting. Familiarity with data modelling in analytical databases. Strong understanding of backend data structures, with experience collaborating with data engineers (Teradata, Databricks, AWS S3 parquet/CSV). Experience with RESTful APIs and AWS services like S3, Glue, and Lambda. Experience using Confluence for tracking documentation. Strong communication and collaboration skills, with the ability to interact effectively with stakeholders at all levels. Ability to work independently and manage multiple tasks and priorities in a dynamic environment. Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Good to Have:
Experience with Spark. Understanding of data visualization tools, particularly Tableau. Knowledge of data clean room techniques and integration methodologies.
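As a purely illustrative example of the CSV/Parquet and S3 work described above, here is a small PySpark sketch that converts CSV landed in S3 into partitioned Parquet. The bucket names, paths, and columns are hypothetical, and the cluster is assumed to already have S3 credentials configured.

```python
# Illustrative sketch only: CSV-on-S3 to partitioned Parquet with PySpark.
# Buckets, prefixes, and columns are hypothetical; S3 credentials are assumed configured.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://example-raw-bucket/exports/transactions/")
)

(
    df.withColumn("ingest_date", F.current_date())  # track when the batch was loaded
    .write
    .mode("append")
    .partitionBy("ingest_date")
    .parquet("s3a://example-curated-bucket/transactions_parquet/")
)
```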
Posted 1 week ago
7.0 - 12.0 years
25 - 27 Lacs
Pune, Bengaluru
Hybrid
Role: Data Analyst / Senior Data Analyst
Experience: 7+ years
Location: Bangalore/Pune
Responsibilities:
Define and obtain source data required to successfully deliver insights and use cases. Determine the data mapping required to join multiple data sets together across multiple sources. Create methods to highlight and report data inconsistencies, allowing users to review and provide feedback. Propose suitable data migration sets to the relevant stakeholders. Assist teams with processing the data migration sets as required. Assist with the planning, tracking and coordination of the data migration team and with the migration run-book and the scope for each customer.
Role Requirements:
Strong Data Analyst with Financial Services experience. Knowledge of and experience using data models and data dictionaries in a Banking and Financial Markets context. Knowledge of one or more of the following domains (including market data vendors): Party/Client, Trade, Settlements, Payments, Instrument and pricing, Market and/or Credit Risk. Demonstrate a continual desire to implement "strategic" or "optimal" solutions and, where possible, avoid workarounds or short-term tactical solutions. Work with stakeholders to ensure that negative customer and business impacts are avoided. Manage stakeholder expectations and ensure that robust communication and escalation mechanisms are in place across the project portfolio. Good understanding of the control requirements surrounding data handling.
Experience/Skillset
Must Have:
Excellent analytical skills and commercial acumen. Proficient in Python, PySpark and SQL. Good understanding of the control requirements surrounding data handling. Experience of big data programmes preferable. Strong verbal and written communication skills. Strong self-starter with strong change delivery skills who enjoys the challenge of delivering change within tight deadlines. Ability to manage multiple priorities. Business analysis skills, defining and understanding requirements. Knowledge of and experience using data models and data dictionaries in a Banking and Financial Markets context. Can write SQL queries and navigate databases, especially Hive, CMD, PuTTY, Notepad++. Enthusiastic and energetic problem solver to join an ambitious team. Good knowledge of SDLC and formal Agile processes, a bias towards TDD and a willingness to test products as part of the delivery cycle. Ability to communicate effectively in a multi-programme environment across a range of stakeholders. Attention to detail.
Good to have:
Preferable knowledge and experience in Data Quality & Governance. For Spark Scala: should have working experience using Scala (preferable) or Java for Spark. For Senior DAs: proven track record of managing small delivery-focussed data teams.
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.
Company Overview
Fractal Analytics is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets. An ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is one who empowers imagination with intelligence.
Job Location: Bangalore/Gurgaon/Chennai/Coimbatore/Pune/Mumbai
Exp Required: 2-5 years
Shift timings: 24x7 (2 week offs, roster based)
Role Details
Need someone with a strong Data Engineering skill set to ensure production-related activities are delivered as per SLA. Need to work on issues/requests, bug fixes, and minor changes, coordinate with the development team in case of any issues, and work on enhancements.
Mandatory Technical Skills
Strong experience in Azure Data Engineering services: Data Factory, Azure Databricks, Storage (ADLS Gen 2), Web App/App Service. Strong experience in programming languages like Python, PySpark, and SQL. Hands-on experience in Log Analytics and Application Insights. Strong experience in Azure SQL.
Mandatory Non-technical Skills
Drive incident/problem resolution by assisting in key operational activities in terms of delivery, fixes, and supportability with the operations team. Assist in change ticket review/approvals/planning and work with internal teams. Assist with projects being transitioned from project teams to Support teams. Escalation point for operation-related issues. Experience working in ServiceNow is preferred. Attention to detail a must, with focus on quality and accuracy. Able to handle multiple tasks with appropriate priority and strong time management skills. Flexible about work content and enthusiastic to learn. Knowledge of service support, operation & design processes (ITIL). Ability to handle concurrent tasks with appropriate priority. Strong relationship skills to work with multiple stakeholders across organizational and business boundaries at all levels.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We are committed to creating an inclusive environment for all employees. If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest!
Posted 1 week ago
4.0 - 9.0 years
11 - 17 Lacs
Bengaluru
Work from Office
Greetings from TSIT Digital!! This is with regard to an excellent opportunity with us, and if you have that unique and unlimited passion for building world-class enterprise software products that turn into actionable intelligence, then we have the right opportunity for you and your career. This is an opportunity for Permanent Employment with TSIT Digital.
What are we looking for: Data Engineer
Experience: 4+ years (relevant experience: 2-5 years)
Location: Bangalore
Notice period: Immediate to 15 days
Job Description:
Work location: Manyata Tech Park, Bengaluru, Karnataka, India
Work mode: Hybrid Model
Client: Lowes
Mandatory Skills: Data Engineer - Scala/Python, SQL, Scripting; knowledge of BigQuery, PySpark, Airflow, Serverless Cloud Native Services, Kafka Streaming
If you are interested, please share your updated CV: kousalya.v@tsit.co.in
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Scala, PySpark Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to guarantee the quality of the applications you create, while continuously seeking ways to enhance functionality and user experience. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute in providing solutions to work related problems. - Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. - Conduct thorough testing and debugging of applications to ensure optimal performance and reliability. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with PySpark, Scala. - Strong understanding of data integration and ETL processes. - Familiarity with cloud computing concepts and services. - Experience in application lifecycle management and agile methodologies. Additional Information: - The candidate should have minimum 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Chennai office. - A 15 years full time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Microsoft Fabric Data Engineer Location: Pune/Bangalore No. of Positions: 1 Experience: 3+ years overall, with 2+ year in Microsoft Fabric Employment Type: Full-time Job Overview: We are seeking a passionate and skilled Microsoft Fabric Data Engineer to design, build, and scale modern data applications on the Microsoft Fabric platform. This role is ideal for professionals with a solid Azure data engineering background who have recently transitioned into Microsoft Fabric and are eager to work on innovative, end-to-end data solutions. Key Responsibilities: Design and implement scalable data solutions using Microsoft Fabric, integrating ingestion, transformation, orchestration, and visualization workflows. Utilize the Microsoft Fabric Workload Development Kit (WDK) to build and embed custom applications within the Fabric environment. Build and manage Lakehouse and Data Warehouse architectures using OneLake, Synapse, and Data Factory (Fabric). Develop and maintain robust data pipelines for batch and near real-time processing. Collaborate with data scientists, analysts, and business teams to deliver performant and business-ready data solutions. Ensure data quality, governance, and security compliance across the Fabric environment. Design Power BI dashboards within Microsoft Fabric for business insights and decision-making. Required Skills & Experience: 3 years of total experience in data engineering, data integration, or BI development. Minimum 1 year of hands-on experience with Microsoft Fabric in a production or pilot setting. Strong hands-on experience with: Azure Data Factory (ADF) PySpark (for scalable data transformation) Azure Synapse Analytics SQL (complex joins, window functions, query optimization) Experience working with OneLake, Lakehouse, and Fabric-native data tools. Ability to design and optimize end-to-end data workflows within Microsoft Fabric. Preferred / Nice-to-Have: Experience with Power BI in Microsoft Fabric. Familiarity with Data Activator, Real-time Intelligence, or Dataflows Gen2. Exposure to CI/CD pipelines using Azure DevOps or GitHub Actions. Understanding of RBAC, data masking, and governance features in Fabric.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Specialty Development Practitioner (GCP Data Engineer) - 5+ Yrs
Skills: Data Architecture, Data Warehousing, Dataform, Google Cloud Platform (BigQuery, Dataflow, Dataproc, Data Fusion), Terraform, Tekton, Cloud SQL, Postgres, Airflow, PySpark, Python, API
Location: Chennai
Posted 1 week ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Role Purpose The purpose of this role is to design, test and maintain software programs for operating systems or applications which needs to be deployed at a client end and ensure its meet 100% quality assurance parameters Do 1. Instrumental in understanding the requirements and design of the product/ software Develop software solutions by studying information needs, studying systems flow, data usage and work processes Investigating problem areas followed by the software development life cycle Facilitate root cause analysis of the system issues and problem statement Identify ideas to improve system performance and impact availability Analyze client requirements and convert requirements to feasible design Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements Conferring with project managers to obtain information on software capabilities 2. Perform coding and ensure optimal software/ module development Determine operational feasibility by evaluating analysis, problem definition, requirements, software development and proposed software Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases, and executing these cases Modifying software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces. Analyzing information to recommend and plan the installation of new systems or modifications of an existing system Ensuring that code is error free or has no bugs and test failure Preparing reports on programming project specifications, activities and status Ensure all the codes are raised as per the norm defined for project / program / account with clear description and replication patterns Compile timely, comprehensive and accurate documentation and reports as requested Coordinating with the team on daily project status and progress and documenting it Providing feedback on usability and serviceability, trace the result to quality risk and report it to concerned stakeholders 3. Status Reporting and Customer Focus on an ongoing basis with respect to project and its execution Capturing all the requirements and clarifications from the client for better quality work Taking feedback on the regular basis to ensure smooth and on time delivery Participating in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members. Consulting with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code Documenting very necessary details and reports in a formal way for proper understanding of software from client proposal to implementation Ensure good quality of interaction with customer w.r.t. e-mail content, fault report tracking, voice calls, business etiquette etc Timely Response to customer requests and no instances of complaints either internally or externally Mandatory Skills: PySpark. Experience: 3-5 Years.
Posted 1 week ago
0.0 years
8 - 18 Lacs
Chennai, Tamil Nadu
On-site
Hi, please check the role below and let me know if you are interested. Share your updated resume.
Position: Data Engineer
Work Location: Kochi, Coimbatore, Chennai
Work Type: Hybrid onsite, 3 days onsite every week
Experience: 3+ years
Skills & Responsibilities:
Data Engineering & Processing: Develop scalable data pipelines using Python and PySpark in Azure Databricks. Work on data ingestion, cleansing, transformation, and preparation for analytics.
Databricks & Azure Platform Skills: Hands-on experience with Databricks notebooks, Delta Lake, working with structured and semi-structured data, and MLflow; use Azure Data Factory, Azure Data Lake, Azure SQL, and Cosmos DB for data processing and storage.
Programming Skills: Strong proficiency in Python for data manipulation, automation, and orchestration tasks; familiarity with PySpark for distributed processing of large datasets.
CI/CD & DevOps: Use Azure DevOps for version control, automated testing, and pipeline-based deployments; follow coding standards and peer review processes.
Testing & Quality: Write and execute unit and validation tests for data pipelines; monitor job performance and troubleshoot data quality issues.
Agile Collaboration: Participate in Agile ceremonies including stand-ups, sprint planning, and retrospectives; work closely with BAs, QAs, and architects to deliver high-quality solutions.
Education: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
Job Type: Contractual / Temporary
Contract length: 12 months
Pay: ₹800,000.00 - ₹1,800,000.00 per year
Schedule: Day shift
Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)
Location: Chennai, Tamil Nadu (Required)
Work Location: In person
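Since the role above lists MLflow alongside Databricks and Delta Lake, here is a minimal, illustrative sketch of logging a run with MLflow. The experiment path, parameters, and metric values are hypothetical placeholders.

```python
# Illustrative sketch only: tracking a simple experiment with MLflow.
# The experiment path, parameters, and metrics are hypothetical.
import mlflow

mlflow.set_experiment("/Shared/example-experiment")  # hypothetical experiment name/path

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("max_iter", 100)
    mlflow.log_metric("auc", 0.87)
    mlflow.log_metric("f1", 0.74)
```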
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
Job Description
This role will be part of a team that develops software that processes data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the Internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and also detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing. You will be expected to coordinate, support, and work with multiple distributed project teams in multiple regions. As a member of the technical staff with our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day, across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system that runs in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.
Qualifications
Responsibilities
System Deployment: Conceive, design and build new features in the existing backend processing pipelines.
CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes.
Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality.
Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores.
Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development.
Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security.
Key Skills
Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field. Proven experience (minimum 3 years) in high-volume data processing development using ETL tools such as AWS Glue or PySpark, Java, SQL, and databases such as Postgres. Minimum 2 years of development on the AWS platform. Strong understanding of CI/CD principles and tools; GitLab a plus. Excellent problem-solving and debugging skills. Strong communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions. Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply. Utilizes team collaboration to create innovative solutions efficiently.
Other Desirable Skills
Knowledge of networking principles and security best practices. AWS certifications. Experience with Data Warehouses, ETL, and/or Data Lakes very desirable. Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus. Exposure to the Google Cloud Platform (GCP).
Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
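As a purely illustrative sketch of the kind of backend-store step this role describes (PySpark feeding a Postgres database), here is a minimal JDBC write. Connection details are hypothetical, and the PostgreSQL JDBC driver is assumed to be on the Spark classpath.

```python
# Illustrative sketch only: writing a processed DataFrame to Postgres over JDBC.
# Host, database, table, and credentials are hypothetical; the Postgres JDBC driver
# is assumed to be available to Spark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("write-to-postgres").getOrCreate()

processed = spark.createDataFrame(
    [("device-1", "2024-01-01", 42), ("device-2", "2024-01-01", 17)],
    ["device_id", "activity_date", "event_count"],
)

(
    processed.write
    .format("jdbc")
    .option("url", "jdbc:postgresql://db.example.internal:5432/meter")  # hypothetical host
    .option("dbtable", "public.daily_device_activity")
    .option("user", "etl_user")
    .option("password", "***")          # use a secrets manager in practice
    .option("driver", "org.postgresql.Driver")
    .mode("append")
    .save()
)
```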
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.
Roles & Responsibilities
Design and develop end-to-end data solutions and code using PySpark, Python, SQL and Kafka, leveraging Microsoft Fabric's capabilities.
Requirements
Hands-on experience with Microsoft Fabric, including Lakehouse, Data Factory, and Synapse. Strong expertise in PySpark and Python for large-scale data processing and transformation. Deep knowledge of Azure data services (ADLS Gen2, Azure Databricks, Synapse, ADF, Azure SQL, etc.). Experience in designing, implementing, and optimizing end-to-end data pipelines on Azure. Understanding of Azure infrastructure setup (networking, security, and access management) is good to have. Healthcare domain knowledge is a plus but not mandatory.
Our Offering
Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance - integration and passion sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture. Let's grow together.
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Atos
Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. €10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is an SE (Societas Europaea) and listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space.
Data Streaming Engineer - Experience: 4+ years
Expertise in the Python language is a must. SQL (should be able to write complex SQL queries) is a must. Hands-on experience in Apache Flink Streaming or Spark Streaming is a must. Hands-on expertise in Apache Kafka is a must. Data Lake development experience. Orchestration (Apache Airflow is preferred). Spark and Hive: optimization of Spark/PySpark and Hive apps. Trino/AWS Athena (good to have). Snowflake (good to have). Data Quality (good to have). File Storage (S3 is good to have).
Our Offering
Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance - integration and passion sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
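To illustrate the Kafka plus Spark Streaming combination listed as a must above, here is a minimal, illustrative Structured Streaming job that reads a Kafka topic and sinks to Parquet. The broker, topic, and paths are hypothetical, and the spark-sql-kafka package is assumed to be available on the cluster.

```python
# Illustrative sketch only: Spark Structured Streaming from Kafka to Parquet.
# Broker, topic, and output paths are hypothetical; the Kafka connector package
# is assumed to be on the cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-example").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.internal:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the value to string for downstream parsing.
decoded = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    decoded.writeStream
    .format("parquet")
    .option("path", "/data/streams/orders/")
    .option("checkpointLocation", "/data/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```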
Posted 1 week ago
8.0 - 14.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are hiring for a Data Engineer role.
Experience: 8-14 years
Locations: Pune, Chennai
Notice Period: Immediate joiners
Mandatory Skills: Python, PySpark, Databricks, Unity Catalog, DLT (Delta Live Tables), Databricks Workflows, Azure/AWS cloud, ADF/Orchestrator, CI/CD.
Qualifications: B.Tech, M.Tech, B.E., B.Com, B.Sc, B.A, MBA
Posted 1 week ago
0 years
0 Lacs
India
Remote
We are hiring a Data Engineer. If you are interested, please feel free to share your CV to SyedaRashna@lancesoft.com Job title: Data Engineer Location: India - Remote Duration: 6 Months Description: We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable. Key Responsibilities: 1. Design, build, and maintain scalable data pipelines using Python, PySpark and Airflow. 2. Develop and optimize ETL workflows on Cloudera Data Platform (CDP). 3. Implement data quality checks, monitoring, and alerting mechanisms. 4. Ensure data security, governance, and compliance across all pipelines. 5 Work closely with cross-functional teams to understand data requirements and deliver solutions. 6. Troubleshoot and resolve issues in production data pipelines. 7. Contribute to the architecture and design of the data platform. 8. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases. 9. Automate deployment and monitoring of data workflows using DevOps tools and practices. 10. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.
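As a purely illustrative companion to the pipeline responsibilities above (Python, PySpark, and Airflow), here is a minimal Airflow 2.x style DAG with two dependent tasks. Task logic, schedule, and names are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal Airflow DAG skeleton with two dependent tasks.
# The schedule, task names, and task bodies are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder for pulling a batch from a source system.
    print("extracting batch")


def transform(**context):
    # Placeholder for a PySpark transformation submitted from this task.
    print("transforming batch")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task
```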
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
Data Engineer Remote 7 Months Contract + Extendable Experience: 6 Years We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable. Key Responsibilities: 1. Design, build, and maintain scalable data pipelines using Python, PySpark and Airflow. 2. Develop and optimize ETL workflows on Cloudera Data Platform (CDP). 3. Implement data quality checks, monitoring, and alerting mechanisms. 4. Ensure data security, governance, and compliance across all pipelines. 5.Work closely with cross-functional teams to understand data requirements and deliver solutions. 6. Troubleshoot and resolve issues in production data pipelines. 7. Contribute to the architecture and design of the data platform. 8. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases. 9. Automate deployment and monitoring of data workflows using DevOps tools and practices. 10. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Silverpush is at the forefront of AI-powered video advertising, delivering sophisticated video ad solutions that empower brands to achieve impactful campaigns within a privacy-centric environment. Operating across 30+ countries, we specialize in creating contextually relevant advertising experiences that drive genuine engagement and conversion. Silverpush's commitment to innovation and technological advancement enables us to navigate the evolving digital landscape, providing our partners with the tools necessary to connect with audiences on a global scale. We are dedicated to fostering a culture of creativity and excellence, driving the future of ad tech with integrity and foresight. For more information about Silverpush’s innovative advertising solutions, please visit www.silverpush.co. Responsibilities: ● Analyze complex datasets to identify trends, patterns, and correlations, and extract actionable insights that can inform strategic decisions. ● Design and build predictive models using statistical and machine learning techniques (e.g., regression, classification, XGBoost, clustering). ● Research and develop analyses and forecasting and optimization methods across ads performance, content performance modeling, and live experiments. ● Research and prototype using cutting-edge LLM technologies and generative AI to unlock new opportunities in personalization, targeting, and automation. Ideal Candidate Profile ● 3+ years of experience in Data Science, ideally in advertising or media-related domains. ● Degree in a quantitative discipline (e.g., Statistics, Computer Science, Mathematics, Masters in DS). ● Deep experience working with large-scale structured and unstructured data. ● Strong foundation in machine learning and statistical modeling. ● Familiar with building and deploying models in production (basic MLOps knowledge). ● Comfortable with NLP and computer vision, and interested in applying LLMs to real-world use cases. ● Excellent communication skills, with the ability to explain complex concepts to non-technical stakeholders. Technical Skills ● Languages & Tools: Python, PySpark, SQL ● ML Techniques: Regression, Classification, Clustering, Decision Trees, Random Forests, XGBoost, SVM ● LLM Tech: Familiarity with tools like OpenAI, Hugging Face, LangChain, and prompt engineering ● Data Infrastructure: ETL tools, Postgres, BigQuery/Snowflake, S3/GCP ● Statistical Analysis: A/B testing, experiment design, causal inference
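To illustrate one of the modeling techniques listed above (classification with XGBoost), here is a minimal, illustrative training-and-evaluation sketch. The data is synthetic and the hyperparameters are placeholders, not a recommendation.

```python
# Illustrative sketch only: training and evaluating a simple XGBoost classifier.
# The features and target are synthetic; hyperparameters are placeholder values.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))                  # 1,000 rows, 8 synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    eval_metric="logloss",
)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```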
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are hiring experienced GCP Data Engineers for the Gurgaon location. Candidates should have strong experience in Big Data, PySpark, Python or Java, and GCP services (GCS, BigQuery, Dataflow, Dataproc, Pub/Sub, Storage). If you have strong expertise in these areas and can join us within 0-30 days, please share your resume at vaishali.tyagi@impetus.com.
Required Skill Set
Able to effectively use GCP managed services, e.g. Dataproc, Dataflow, Pub/Sub, Cloud Functions, BigQuery, GCS (at least 4 of these services). Good to have knowledge of Cloud Composer, Cloud SQL, Bigtable, Cloud Functions. Strong experience in Big Data technologies - Hadoop, Sqoop, Hive and Spark, including DevOps. Good hands-on expertise in either Python or Java programming. Good understanding of GCP core services like Google Cloud Storage, Google Compute Engine, Cloud SQL, Cloud IAM. Good to have knowledge of GCP services like App Engine, GKE, Cloud Run, Cloud Build, Anthos. Ability to drive the deployment of the customers' workloads into GCP and provide guidance, a cloud adoption model, service integrations, appropriate recommendations to overcome blockers, and technical roadmaps for GCP cloud implementations. Experience with technical solutions based on industry standards using GCP IaaS, PaaS and SaaS capabilities. Act as a subject-matter expert or developer around GCP and become a trusted advisor to multiple teams.
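As a small, illustrative example of one of the GCP services named above, here is a minimal query run with the BigQuery Python client. The project, dataset, and table are hypothetical, and application default credentials are assumed to be configured.

```python
# Illustrative sketch only: running a query with the BigQuery Python client.
# Project, dataset, and table names are hypothetical; ADC credentials are assumed.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date DESC
    LIMIT 7
"""

for row in client.query(query).result():
    print(row.event_date, row.events)
```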
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Data Engineer - Azure Databricks, PySpark, Python, Airflow - Chennai/Pune, India (6-10 years exp only)
YOU’LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES
Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Data Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins and Bitbucket/GitHub.
Responsibilities
Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies. Deploy application components using CI/CD pipelines. Build utilities for monitoring and automating repetitive functions. Collaborate with Agile cross-functional teams - internal and external clients including Operations, Infrastructure, Tech Ops. Collaborate with the Data Science team and productionize the ML models. Participate in a rotational support schedule to provide responses to customer queries and deploy bug fixes in a timely and accurate manner.
Qualifications
6-10 years of applicable software engineering experience. Strong fundamentals with experience in Big Data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL. Must have experience in cloud technologies, preferably Microsoft Azure. Must have experience in performance optimization of Spark workloads. Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker. Good to have knowledge of Snowflake. Good to have knowledge of relational databases, preferably PostgreSQL. Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business. Minimum B.S. degree in Computer Science, Computer Engineering or related field.
Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.
Our Benefits
Flexible working environment. Volunteer time off. LinkedIn Learning. Employee-Assistance-Program (EAP).
About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook
Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. 
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
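The qualifications above call out performance optimization of Spark workloads. As a purely illustrative, hedged sketch, and not part of the posting itself, a broadcast join plus sensible write partitioning is a common first step in this kind of PySpark/Databricks tuning; the table names, paths, and join keys below are hypothetical placeholders.

```python
# Illustrative sketch only: a broadcast join avoids shuffling the large table
# when the dimension table is small. All paths and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("broadcast-join-sketch").getOrCreate()

# Hypothetical large fact table and small dimension table stored as Delta.
sales = spark.read.format("delta").load("/mnt/data/sales")
stores = spark.read.format("delta").load("/mnt/data/stores")

# Broadcasting the small table keeps the join map-side instead of forcing a
# full shuffle of the large table.
enriched = sales.join(F.broadcast(stores), on="store_id", how="left")

# Repartitioning by the write key before saving reduces small-file overhead.
(enriched
    .repartition("sale_date")
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("sale_date")
    .save("/mnt/data/sales_enriched"))
```

In practice the right optimization depends on the workload; broadcast thresholds, partition counts, and caching are typically tuned against the Spark UI rather than chosen up front.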
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description

Data Engineer - Azure Databricks, PySpark, Python, Airflow | Chennai/Pune, India (3-6 years experience only)

YOU'LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES

Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Junior Data Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins, and Bitbucket/GitHub.

Responsibilities

Develop, test, troubleshoot, debug, and enhance applications using Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies (an illustrative Airflow orchestration sketch follows this posting).
Deploy application components using CI/CD pipelines.
Build utilities for monitoring and automating repetitive functions.
Collaborate with Agile cross-functional teams, including internal and external clients across Operations, Infrastructure, and Tech Ops.
Collaborate with the Data Science team to productionize ML models.
Participate in a rotational support schedule to respond to customer queries and deploy bug fixes in a timely and accurate manner.

Qualifications

3-6 years of applicable software engineering experience.
Strong fundamentals and experience with big data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL.
Must have experience with cloud technologies, preferably Microsoft Azure.
Must have experience in performance optimization of Spark workloads.
Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, and Docker.
Good to have knowledge of Snowflake.
Good to have knowledge of relational databases, preferably PostgreSQL.
Excellent English communication skills, with the ability to interface effectively across cross-functional technology teams and the business.
Minimum B.S. degree in Computer Science, Computer Engineering, or a related field.

Additional Information

Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.

Our Benefits

Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ

NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion

NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
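Both this and the preceding posting rely on Airflow for orchestration. The following is a minimal, hedged sketch of what a DAG for this kind of pipeline might look like; the DAG id, schedule, and task callables are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4 or newer.

```python
# Minimal, illustrative Airflow DAG sketch; not taken from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull source data (e.g., from Azure storage) for downstream steps.
    print("extracting source data")


def transform():
    # Placeholder: trigger the PySpark/Databricks transformation step.
    print("running transformation")


with DAG(
    dag_id="connect_pipeline_sketch",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # assumes Airflow 2.4+, where `schedule` replaces `schedule_interval`
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run extraction before transformation.
    extract_task >> transform_task
```

In a real deployment the Python callables would typically be replaced with provider operators (for example, Databricks or Spark submit operators) wired into the CI/CD pipelines the posting describes.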
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Big Data Architect working on a contract basis for a renowned client, you will draw on your expertise in technologies such as Hadoop, NoSQL, Spark, PySpark, Spark real-time streaming, Elasticsearch, Kafka, Scala/Java, ETL platforms, and data stores such as HBase, Cassandra, and MongoDB (an illustrative streaming sketch follows this posting). Your primary role will involve ensuring the completion of surveys and addressing any queries promptly.

You will play a crucial part in conceptualizing action plans by engaging with clients, Delivery Managers, vertical delivery heads, and service delivery heads. Your responsibilities will also include driving account-wise tracking of action plans aimed at enhancing Customer Satisfaction (CSAT) across various projects. You will conduct quarterly pulse surveys for selected accounts or projects to ensure periodic check-ins and feedback collection. Furthermore, you will support the Account Leadership teams in tracking and managing client escalations effectively to ensure timely closure.

With over 10 years of experience and a degree in any graduation discipline, you will contribute to the success of projects in a hybrid work mode. Immediate availability to join is essential for this role, which is based in Hyderabad.
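The architect role lists Kafka and Spark streaming together. Below is a minimal, hedged sketch of a Spark Structured Streaming read from Kafka, assuming the spark-sql-kafka connector is available on the cluster; the broker address, topic name, event schema, and sink paths are hypothetical placeholders rather than details from the posting.

```python
# Illustrative Spark Structured Streaming sketch: Kafka in, Parquet out.
# Requires the spark-sql-kafka connector on the classpath; all names are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Hypothetical schema for JSON payloads on the topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "events")                      # placeholder topic
       .load())

# Kafka delivers the payload as bytes; cast to string and parse the JSON.
events = (raw
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Checkpointing makes the streaming query restartable after failure.
query = (events.writeStream
         .format("parquet")
         .option("path", "/tmp/events")             # placeholder sink path
         .option("checkpointLocation", "/tmp/chk")  # placeholder checkpoint location
         .outputMode("append")
         .start())
```

A comparable pipeline could instead sink to Elasticsearch, HBase, or Cassandra via their Spark connectors, which is closer to the stack this role enumerates.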
Posted 1 week ago