
1517 Data Processing Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 4.0 years

1 - 2 Lacs

Noida, New Delhi, Delhi / NCR

Work from Office


We are seeking a motivated and detail-oriented Ecommerce Executive to join our team. Good communication skills are a must. This role is perfect for freshers or individuals with 0-4 years of experience who are passionate about the eCommerce industry. As an Ecommerce Executive, you will support mid- to senior-level management across various functions and play a crucial role in driving our online business growth.

Requirements:
- Good communication skills
- Knowledge of Excel
- Good typing speed

Things You Will Learn:
- Fundamentals of eCommerce: gain a strong understanding of the eCommerce landscape and its operations.
- Marketplace exposure: learn how to manage brands across marketplaces such as Amazon, Flipkart, Jiomart, Nykaa, Myntra, Swiggy, and Blinkit.
- Reporting: develop skills in using tools to create detailed reports.
- Analysis of eCommerce data: learn to study and analyze data points from online marketplaces to drive business decisions.
- Learning from industry experts with 8-14 years of experience.

Responsibilities:
- Assist in managing the day-to-day operations of our ecommerce platforms.
- Support mid- to senior-level management in executing ecommerce strategies.
- Maintain and update product listings on various marketplaces.
- Monitor and report on sales performance and other key metrics.
- Coordinate with different teams to ensure seamless order processing and fulfillment.
- Handle customer queries and resolve any issues promptly.
- Analyze market trends and competitor activities to identify opportunities for growth.
- Contribute to optimizing the user experience on our ecommerce sites.

Posted 3 days ago

Apply

3.0 - 9.0 years

17 - 19 Lacs

Hyderabad

Work from Office


Job Description Summary:
- Provide analytical support to key Novartis stakeholders to aid decision-making processes.
- Support and facilitate data-enabled decision making for Novartis internal customers by providing and communicating qualitative and quantitative analytics.
- Generate reports that track product metrics, progress, and KPIs.
- This role requires a blend of business insight and technical understanding, enabling you to collaborate with brand teams, marketing teams, and other functions to maximize value.
- Technical requirements: SQL and Dataiku; Python is a plus.

About the Role

Key Responsibilities:
- Solid understanding of multiple datasets (e.g. LAAD, Xponent, DDD); manage and coordinate data sets from databases to find patterns and trends, and refine this complex, granular data into actionable insights.
- Responsible for standard and ad-hoc extracts/reports across multiple primary and secondary data sources.
- Track ongoing outcomes reports and manage priorities for upcoming reports.
- Share findings with partners through reports and presentations on a timely basis.
- Put together specifications to extract/transform data into required formats for different analytical elements using programming languages like SQL or other data processing tools.
- Build the foundation for more sophisticated approaches to APLD analysis and advanced analytics wherever required and beneficial.
- Establish and maintain positive relationships with key functional partners.

Essential Requirements:
- Ability to work independently and as an integral member of the team; attention to detail and quality focus; good interpersonal and communication skills; influence, negotiation, and tact; innovative and collaborative behaviors; a can-do orientation.
- Curiosity and strong analytical thinking, verbal and written communication skills, and exposure to working in a multifunctional/multicultural environment.
- Conceptual, analytical, and tactical thinking, and a strategic thought process.
- Ability to multi-task and work under tight deadlines in a demanding, distributed team environment while maintaining strong individual performance.

Desirable Requirements:
- Master's or Bachelor's degree in STEM.
- At least 3 years of experience in data modeling and reporting solutions development, hands-on experience with APLD and US national and subnational datasets, and the ability to lead teams functionally.
- Technical abilities: Excel, SQL or Dataiku, and PowerPoint are vital. Knowledge of statistical modeling or ML is a plus.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally:
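As a rough illustration of the standard extracts this role describes, here is a minimal sketch that pulls one month's product-level totals with SQL from Python. The table and column names (prescriptions, trx_count, fill_date) are hypothetical stand-ins, not an actual Novartis or vendor dataset schema.

```python
import sqlite3  # stand-in for whichever SQL backend the extract runs against


def monthly_product_extract(conn: sqlite3.Connection, year: int, month: int):
    """Illustrative extract: total prescriptions per product for one month."""
    query = """
        SELECT product_name, SUM(trx_count) AS total_trx
        FROM prescriptions                       -- hypothetical table
        WHERE strftime('%Y', fill_date) = ?
          AND strftime('%m', fill_date) = ?
        GROUP BY product_name
        ORDER BY total_trx DESC
    """
    return conn.execute(query, (f"{year:04d}", f"{month:02d}")).fetchall()
```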

Posted 3 days ago

Apply

13.0 - 17.0 years

37 - 45 Lacs

Noida

Work from Office


Job Responsibilities:
- Involvement in solution planning.
- Convert business specifications into technical specifications.
- Write clean code and review the code of project team members (as applicable).
- Adhere to the Agile delivery model.
- Able to solve L3 application-related issues.
- Able to scale up on new technologies.
- Able to document project artifacts.

Technical Skills:
- UI development
- .NET, Angular, React
- Object-oriented programming, design patterns, and development knowledge
- API-driven solution stack, data processing, reactive architecture
- Agile development and DevSecOps understanding across the end-to-end development life cycle
- Preferably has worked in the financial domain

Posted 3 days ago

Apply

8.0 - 13.0 years

7 - 12 Lacs

Hyderabad

Work from Office


Job Description: We are seeking a seasoned Data Engineering Manager with 8+ years of experience to lead and grow our data engineering capabilities. This role demands strong hands-on expertise in Python, SQL, and Spark, and advanced proficiency in AWS and Databricks. As a technical leader, you will be responsible for architecting and optimizing scalable data solutions that enable analytics, data science, and business intelligence across the organization.

Key Responsibilities:
- Lead the design, development, and optimization of scalable and secure data pipelines using AWS services such as Glue, S3, Lambda, and EMR, plus Databricks Notebooks, Jobs, and Workflows.
- Oversee the development and maintenance of data lakes on AWS Databricks, ensuring performance and scalability.
- Build and manage robust ETL/ELT workflows using Python and SQL, handling both structured and semi-structured data.
- Implement distributed data processing solutions using Apache Spark / PySpark for large-scale data transformation.
- Collaborate with cross-functional teams, including data scientists, analysts, and product managers, to ensure data is accurate, accessible, and well-structured.
- Enforce best practices for data quality, governance, security, and compliance across the entire data ecosystem.
- Monitor system performance, troubleshoot issues, and drive continuous improvements in data infrastructure.
- Conduct code reviews, define coding standards, and promote engineering excellence across the team.
- Mentor and guide junior data engineers, fostering a culture of technical growth and innovation.

Requirements:
- 8+ years of experience in data engineering, with proven leadership in managing data projects and teams.
- Expertise in Python, SQL, and Spark (PySpark), with experience running AWS and Databricks in production environments.
- Strong understanding of modern data architecture, distributed systems, and cloud-native solutions.
- Excellent problem-solving, communication, and collaboration skills.
- Prior experience mentoring team members and contributing to strategic technical decisions is highly desirable.
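For context on the kind of pipeline work described above, here is a minimal PySpark ETL sketch: read raw JSON, deduplicate and filter, and write partitioned Parquet. The bucket paths and column names are hypothetical placeholders, not the actual environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON landed in a (hypothetical) S3 bucket.
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: drop duplicate orders, keep valid amounts, derive a date column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write curated Parquet, partitioned for downstream analytics.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```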

Posted 3 days ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Bengaluru

Work from Office


content="Power BI Development (Mandatory requirements)Design and develop interactive dashboards, KPIs, and reports using Power BI.Create data visualizations and reports based on business requirements.Optimize Power BI datasets and reports for performance.Publish reports to Power BI Service and manage user access and permissions.Collaborate with business stakeholders to gather requirements and translate them into effective dashboards.Data EngineeringDevelop and maintain robust ETL/ELT pipelines using tools like ADF, SSIS, or custom scripts. - Nice to have Work with large datasets from multiple sources (SQL, APIs, cloud storage, etc.). - Must haveCreate and maintain data models and data warehouses (Azure Synapse, Snowflake, etc.). - Nice to have Implement data quality checks, logging, and performance monitoring. - Must haveCollaborate with data architects and analysts to define data standards and governance. - Must haveRequired Skills & Experience: (Must Have)4-5 years of professional experience in data engineering and Power BI development.Strong experience with SQL and relational databases (e.g., SQL Server, PostgreSQL, MySQL).Proficient in Power BI (DAX, Power Query, Data Modeling).Hands-on experience with Azure Data Factory, Synapse Analytics, or other cloud-based data tools.Knowledge of Python or other scripting languages for data processing (optional but preferred).Strong understanding of data warehousing concepts and dimensional modelling.Excellent problem-solving, communication, and documentation skills.Strong in Business Analytics skills ">

Posted 3 days ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office


Our team is building a low-latency, highly scalable policy computation and storage layer to support pay policies across multiple businesses and regions. We use cutting-edge technologies from AWS and continuously challenge ourselves to build the right solution. The Time and Attendance team's charter is to build a world-class product which meets attendance and pay computation needs for over 2 million hourly associates across Amazon businesses.

People Technology is the central hub for all Amazon.com people data. Our technology provides the foundation and orchestration for a multitude of key human resource processes, from onboarding tens of thousands of temporary employees during the peak holiday season to integrating critical employee data with internal and external systems. We implement and build highly secure, global software that allows Amazon.com to effectively manage the workforce, resulting in a better employee experience and a better bottom line.

Amazon Time and Attendance (TAA) is looking for talented Software Development Engineers (SDEs) to join their team in Hyderabad, India. Amazon continuously pushes the limit to deliver packages and goods to customers as fast as possible. Gaining efficiencies in tracking productivity, time, and attendance is paramount to achieving this goal. You will get a chance to invent new technologies and build custom solutions to help Amazon track the time, attendance, and productivity of employees, and to impact the employee experience. How hard can it be to pay people for the right number of hours worked, considering compliance and business policies that vary across country, state, city, and business? Would you be excited to dive into a surprisingly complicated space that is tangible to all Amazonians, with real-time analytics, surge-traffic handling, fault detection, and data processing, by developing new solutions on NAWS/serverless platforms? Then you are the person People Technology is looking for.

Qualifications:
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture experience (design patterns, reliability, and scaling) with new and existing systems
- Experience programming with at least one software programming language
- 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Posted 3 days ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Bengaluru

Work from Office


We're opening our first-ever office in India and looking to hire a Software Engineer to advance our mission of building real-time multimodal intelligence. In this role, you'll:
- Design and build a low-latency, scalable, and reliable model inference and serving stack for our cutting-edge SSM foundation models
- Work closely with our research team and product engineers to translate cutting-edge research into incredible products
- Build highly parallel, high-quality data processing and evaluation infrastructure for foundation model training

You'll have significant autonomy to shape our products and directly impact how cutting-edge AI is applied across various devices and applications.

What we're looking for: Given the scale and difficulty of the problems we work on, we value strong engineering skills at Cartesia.
- Strong engineering skills; comfortable navigating complex codebases and monorepos.
- An eye for craft, writing clean and maintainable code.
- Comfortable diving into new technologies and quickly adapting your skills to our tech stack (Go and Python on the backend, Next.js for the frontend).
- Experience building large-scale distributed systems with high demands on performance, reliability, and observability.
- Technical leadership, with the ability to execute and deliver zero-to-one results amidst ambiguity.
- [Bonus] Background in or experience working with machine learning and generative models.

Posted 3 days ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Chennai

Work from Office


Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves.

The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles and data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures.

This position is based out of one of the offices of our affiliate Acqueon Technologies in India and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States.

Responsibilities:
- Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage).
- Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security.
- Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g. DOMO, Looker, Databricks).
- Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards.
- Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability.
- Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development.

Requirements:
- Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g. Mathematics, Statistics, Engineering).
- 3+ years of experience using GCP Data Lake and Storage Services.
- Certifications in GCP are preferred (e.g. Professional Cloud Developer, Professional Cloud Database Engineer).
- Advanced proficiency with SQL, with experience writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows.
- Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion.

Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer.

View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal

Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.

Posted 3 days ago

Apply


1.0 - 4.0 years

3 - 6 Lacs

Gurugram

Work from Office


About this role: BlackRock is seeking a highly skilled and motivated Analyst to support its growing and dynamic Client Data function! In this role, you will be responsible for driving the accuracy, quality, and consistent use of the most impactful, globally relevant data fields, facilitating scale and efficiency across BlackRock's global sales and service ecosystem. You will work closely with cross-functional teams, including business stakeholders and technical teams for Client Data, to establish standards for the entry and maintenance of client data, implement exception monitoring to identify data inconsistencies, and complete high-risk updates where required. At BlackRock, we are dedicated to encouraging an inclusive environment where every team member can thrive and contribute to our world-class success. This is your chance to be part of a firm that is not only ambitious but also committed to delivering flawless and proven investment strategies.

Key Responsibilities: As a Data Analyst, you will play a pivotal role in ensuring the accuracy and efficiency of our client data. Your responsibilities will include:
- Data Governance & Quality: Monitor data health and integrity, and ensure data products meet strict standards for accuracy, completeness, and consistency. Conduct regular assessments to identify deficiencies and opportunities for improvement.
- Data Management: Maintain, cleanse, and update records within the Client Relationship Management systems. This may include researching information across a variety of data sources, working with internal client support groups to create data structures that mimic client asset pools, and connecting client information across data sources.
- Process Improvement and Efficiency: Identify and complete process improvements from initial ideation to implementation. Collaborate with cross-functional teams (product managers, engineers, and business stakeholders) to plan, design, and deliver data products.
- Quality Assurance: Collaborate with teams to test new CRM features, ensuring tools function accurately and identifying defects for resolution.
- Collaboration & Communication: Prioritize effectively with various collaborators across BlackRock. Ensure efficient and timely data governance and maintenance in an agile environment.

Qualifications & Requirements: We seek candidates who are ambitious, diligent, and have a proven track record in data management. The ideal candidate will possess the following qualifications:

Experience:
- MBA or equivalent experience required; major in Business, Finance, MIS, Computer Science, or related fields preferred
- 1 to 4 years of experience in data management or data processing
- Financial services industry experience is a plus but not required

Skills and Qualifications:
- Proficiency in SQL; Python experience a plus
- Proficiency in data management/reporting tools and technologies such as Power BI a plus
- Experience with business applications, including Excel and PowerPoint
- Experience working with CRM platforms; Microsoft Dynamics experience a plus
- Organized and detail-oriented, with strong time management skills
- Self-motivated, with a strong focus on service and the ability to liaise with many groups across the company
- Excellent online research skills
- Exceptional written and verbal communication skills

Our benefits: To help you stay energized, engaged, and inspired, we offer a wide range of benefits, including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents, and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.

About BlackRock: This mission would not be possible without our smartest investment - the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued, and supported with networks, benefits, and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation, and other protected attributes at law. #EarlyCareers

Posted 3 days ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office


Medable's mission is to get effective therapies to patients faster. We provide an end-to-end, cloud-based platform with a flexible suite of tools that allows patients, healthcare providers, clinical research organizations, and pharmaceutical sponsors to work together as a team in clinical trials. Our solutions enable more efficient clinical research, more effective healthcare delivery, and more accurate precision and predictive medicine. Our target audiences are patients, providers, principal investigators, and innovators who work in healthcare and life sciences. Our vision is to accelerate the path to human discovery and medical cures. We are passionate about driving innovation and empowering consumers. We are proactive, collaborative, self-motivated learners: committed, bold, and tenacious. We are dedicated to making this world a healthier place.

1. Responsibilities
- Develop and integrate algorithms for data transfers, compliance reports, and other analyses that are useful for measuring performance and/or delivering a successful clinical trial
- Present information in a statistically valid and impactful way using data visualization techniques and tools such as R Shiny, Looker, and Spotfire
- Represent the Data Science team in internal and external meetings to discuss data transfer and report requirements
- Support Data Science process improvement activities under the guidance of the Senior Data Scientists
- Other duties as assigned

2. Experience
- 2+ years working in a Data Science, Statistics, or Programming role, or a combination of education and experience

3. Skills
- Highly analytical, with a strength for analysis, math, and statistics
- Critical thinking and problem-solving skills
- Experience in data processing and mining
- Analytical mind and business acumen
- Excellent communication and presentation skills; confident in data storytelling
- Knowledge of the drug development industry and the role data plays in clinical trials
- R, Python, SQL, and SAS required

4. Education, Certifications, Licenses
- Bachelor's degree in Mathematics, Statistics, Data Science, or a related field

5. Travel Requirements
- As required

At Medable, we believe that our team of Medaballers is our greatest asset. That is why we are committed to your personal and professional well-being. Our rewards are more than just benefits - they demonstrate our commitment to providing an inclusive, healthy, and rewarding experience for all our team members.
- Flexible Work: Remote from the start, we believe in a flexible employee experience.
- Compensation: Competitive base salaries; annual performance-based bonus; stock options for employees, aligning personal achievements to Medable's success.
- Health and Wellness: Comprehensive medical, dental, and vision insurance coverage; Carrot Fertility Program; Health Savings Accounts (HSA) and Flexible Spending Accounts (FSA); wellness program (mental, physical, and financial).
- Recognition: Peer-to-peer recognition program, celebrating achievements and milestones.
- Community Involvement: Volunteer time off to support causes you care about.

Medable is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need assistance or would like to request an accommodation due to a disability, please contact us at hr@medable.com.

Posted 3 days ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office


Job Description: Remote working or permanent work from home.

Business Unit Mission: The Survey Programmer is responsible for programming market research studies against client materials, with a goal to meet and exceed client expectations in terms of integrity of collected data and adherence to the agreed timeline and budget. This individual also develops and implements out-of-the-box solutions as needed. The programmer contributes to improving overall department efficiency by adhering to current best practices as well as contributing to the creation of new best practices. This individual works closely with the other programmers and project managers, and may work directly with clients on certain projects. A successful candidate has strong programming knowledge of Confirmit (Forsta Plus), preferably also Decipher (Forsta Surveys), and of pharmaceutical market research in general.

Essential Duties and Responsibilities (including, but not limited to, the following):
- 50% - Online Survey Development: Maintains full responsibility for online survey development: thoroughly reviews client materials; delivers high-quality surveys against client materials within the expected timeline; works to resolve amendments during the quality assurance/client testing phase.
- 30% - Client Focus: Strives to exceed the client's satisfaction when programming: proactively suggests design improvements to ensure panelist comprehension; meets programming timelines; resolves data issues in the timeliest and most complete manner; works directly with the client on custom solutions.
- 5% - Survey Deployment and Online Reporting: Prepares surveys for deployment and sets up links to track progress in the field.
- 5% - Deliverables: Works with Data Processing to meet client deliverable requirements, i.e., working on dashboards, specific data requirements, etc.
- 5% - Process: Contributes to improving processes by working on assigned tasks that increase capabilities, efficiency, skillset, and marketplace competitive advantage.
- 5% - Miscellaneous: Spends time reporting to and receiving information from their manager, and continues to learn the latest technologies in the data collection field.

Essential job functions (including, but not limited to, the following): Maintain regular and punctual attendance. Work cooperatively with others. Comply with all company policies and procedures.

Supervisory Responsibility: No

Outcomes:
- Minimum 6.4 programming client satisfaction score on a 1-7 scale.
- Minimum 94% error-free survey links.
- Minimum 98% on-time link delivery, with a share of early deliveries expected.

Competencies:
- Expert knowledge of Confirmit (Forsta Plus) a must (minimum 2 years required).
- Market research experience, preferably pharmaceutical (minimum 1 year required).
- Knowledge of Decipher (Forsta Surveys) in addition to Confirmit (Forsta Plus) desirable.
- JavaScript, jQuery, and HTML programming knowledge.
- Commitment to accuracy and integrity, doing it right the first time.
- Strong problem-solving skills, including an ability to think outside the box.
- Adherence to set processes and standards.

Qualifications, Education and Training Required: Bachelor's or Master's in Computer Science (B.E./B.Tech/MSc/MCA).

Posted 3 days ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Job Summary: We are seeking a highly motivated and experienced Data Engineer to design, build, and maintain robust data pipelines. The ideal candidate will have deep expertise in cloud-based data processing technologies on GCP, AWS, or Azure. You will play a critical role in ensuring data reliability, quality, and availability to support our business needs, and you will get to work on cutting-edge data infrastructure with real-world impact.

Responsibilities:
- Design, develop, and maintain scalable data pipelines using cloud services (GCP, AWS, or Azure).
- Implement data ingestion, transformation, and storage solutions.
- Ensure data quality, integrity, and security across all data systems.
- Monitor and troubleshoot data pipelines to identify and resolve issues.
- Ensure 99.999% uptime and performance SLAs for the production environment.
- Collaborate with data scientists and other stakeholders to understand data requirements.
- Optimize data pipelines for performance, reliability, and cost-effectiveness.
- Create and maintain clear documentation of data flows, system architecture, and operational procedures.

Qualifications:
- B.Tech./M.Tech. degree or higher educational qualification.
- 3+ years of experience in Data Engineering or related roles.
- Must have worked on production deployments, including managing scalable data pipelines that process high-volume, real-time data.
- Proficiency with cloud services such as GCP, AWS, or Azure is required.
- Hands-on experience with the distributed stream processing ecosystem, including Apache Kafka, Flink, or Beam.
- Strong programming skills in languages such as Python, Scala, or NodeJS.
- Experience with SQL, NoSQL, and especially timeseries databases.
- Experience with parallel programming and distributed data processing.
- Solid understanding of data warehousing concepts and technologies.
- Ability to design and implement data architectures.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.

Preferred Qualifications:
- Experience developing data pipelines and processing services, with a deeper understanding of quality controls.
- Knowledge of IoT data and related technologies.
- A Google Cloud Certified Professional Data Engineer will be given preference.
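As a minimal sketch of the stream-processing side of such a role, the snippet below consumes JSON events with the kafka-python client. The topic name, broker address, and event fields are placeholders; a production pipeline would hand these events to a Flink/Beam job or a timeseries store rather than print them.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "sensor-events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # placeholder broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Placeholder processing step: filter and surface valid readings.
    if event.get("reading") is not None:
        print(event.get("device_id"), event.get("reading"))
```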

Posted 3 days ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office


Title/Position: Full Stack Quality Engineer
Job Location: Pune
Employment Type: Full Time
Shift Time: UK Shift

Job Overview: We are seeking a detail-oriented and highly motivated Quality Assurance Engineer to join our dynamic team. As a QA Engineer, you will play a crucial role in ensuring the quality and reliability of our software products. The ideal candidate will possess strong analytical skills, a keen eye for detail, and a passion for delivering high-quality software solutions.

Responsibilities:
- Develop and execute test plans, test cases, and automation scripts for ETL pipelines and data validation.
- Perform SQL-based data validation to ensure data integrity, consistency, and correctness, preferably on RDS.
- Work closely with data engineers to test and validate data and frontend implementations.
- Automate data quality tests using Python and integrate them into CI/CD pipelines.
- Use GitHub for version control, managing test scripts, and collaborating on automation frameworks.
- Debug and troubleshoot data-related issues across different environments.
- Ensure data security, compliance, and governance requirements are met.
- Collaborate with stakeholders to define and improve testing strategies for big data solutions.
- Create and update automation scripts for frontend and API testing using frameworks like Selenium, Pytest, and others.

Requirements:
- 3+ years of experience in QA, with a focus on data testing, ETL testing, and data validation.
- Strong proficiency in SQL for data validation, transformation testing, and debugging.
- Good working knowledge of Python, Node.js, Angular, and GitHub for test automation and version control.
- Experience with ETL, frontend testing, and data solutions.
- Understanding of data pipelines, the Blue framework, and data governance concepts.
- Experience with cloud platforms (AWS) is a plus.
- Knowledge of PySpark and distributed data processing frameworks is a plus.
- Familiarity with CI/CD pipelines and test automation frameworks.
- Strong problem-solving skills and attention to detail.
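To make the SQL-based validation concrete, here is a self-contained pytest sketch comparing a source and target table. It uses an in-memory SQLite database as a stand-in for a real RDS source/target pair, and the table names are hypothetical.

```python
import sqlite3

import pytest


@pytest.fixture
def conn():
    # In-memory stand-in for the real source/target databases.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE source_orders (id INTEGER PRIMARY KEY, amount REAL);
        CREATE TABLE target_orders (id INTEGER PRIMARY KEY, amount REAL);
        INSERT INTO source_orders VALUES (1, 10.0), (2, 25.5);
        INSERT INTO target_orders VALUES (1, 10.0), (2, 25.5);
    """)
    yield conn
    conn.close()


def test_row_counts_match(conn):
    src = conn.execute("SELECT COUNT(*) FROM source_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM target_orders").fetchone()[0]
    assert src == tgt


def test_no_orphan_source_rows(conn):
    # Every source row should have landed in the target (ETL completeness).
    orphans = conn.execute("""
        SELECT COUNT(*) FROM source_orders s
        LEFT JOIN target_orders t ON s.id = t.id
        WHERE t.id IS NULL
    """).fetchone()[0]
    assert orphans == 0
```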

Posted 3 days ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Hyderabad

Work from Office


We are seeking a seasoned Data Engineering Manager with 8+ years of experience to lead and grow our data engineering capabilities. This role demands strong hands-on expertise in Python, SQL, and Spark, and advanced proficiency in AWS and Databricks. As a technical leader, you will be responsible for architecting and optimizing scalable data solutions that enable analytics, data science, and business intelligence across the organization.

Key Responsibilities:
- Lead the design, development, and optimization of scalable and secure data pipelines using AWS services such as Glue, S3, Lambda, and EMR, plus Databricks Notebooks, Jobs, and Workflows.
- Oversee the development and maintenance of data lakes on AWS Databricks, ensuring performance and scalability.
- Build and manage robust ETL/ELT workflows using Python and SQL, handling both structured and semi-structured data.
- Implement distributed data processing solutions using Apache Spark / PySpark for large-scale data transformation.
- Collaborate with cross-functional teams, including data scientists, analysts, and product managers, to ensure data is accurate, accessible, and well-structured.
- Enforce best practices for data quality, governance, security, and compliance across the entire data ecosystem.
- Monitor system performance, troubleshoot issues, and drive continuous improvements in data infrastructure.
- Conduct code reviews, define coding standards, and promote engineering excellence across the team.
- Mentor and guide junior data engineers, fostering a culture of technical growth and innovation.

Requirements:
- 8+ years of experience in data engineering, with proven leadership in managing data projects and teams.
- Expertise in Python, SQL, and Spark (PySpark), with experience running AWS and Databricks in production environments.

Posted 3 days ago

Apply

3.0 - 5.0 years

12 - 16 Lacs

Bengaluru

Work from Office


Job Description: We are seeking a highly skilled and motivated Cloud Data Engineer with a strong background in computer science or statistics and at least 5 years of professional experience. The ideal candidate will possess a deep understanding of cloud computing, particularly AWS, and a proven track record in data engineering, Big Data applications, and AI/ML applications.

Responsibilities:

Cloud Expertise:
- Proficient in AWS services such as EC2, VPC, Lambda, DynamoDB, API Gateway, EBS, S3, IAM, and more.
- Design, implement, and maintain scalable cloud-based solutions.
- Execute efficient and secure cloud infrastructure configurations.

Data Engineering:
- Develop, construct, test, and maintain architectures such as databases and processing systems.
- Utilize coding skills in Spark and Python for data processing and manipulation.
- Administer multiple ETL applications to ensure seamless data flow.

Big Data Applications:
- Work on end-to-end Big Data application projects, from conception to deployment.
- Optimize and troubleshoot Big Data solutions to ensure high performance.

AI/ML Applications:
- Develop and deploy AI/ML applications based on NLP, CV, and GenAI.
- Collaborate with data scientists to implement machine learning models in production environments.

DevOps and Infrastructure as a Service (IaaS):
- Knowledge of and experience with DevOps applications for continuous integration and deployment.
- Set up and maintain infrastructure as a service, ensuring scalability and reliability.

Qualifications:
- Bachelor's degree in Computer Science, Statistics, or a related field.
- 5+ years of professional experience in cloud computing, data engineering, and related fields.
- Proven expertise in AWS services, with a focus on EC2, VPC, Lambda, DynamoDB, API Gateway, EBS, S3, IAM, etc.
- Proficient coding skills in Spark and Python for data processing.
- Hands-on experience with Big Data application projects.
- Experience in AI/ML applications, particularly NLP, CV, and GenAI.
- Administration experience with multiple ETL applications.
- Knowledge of and experience with DevOps tools and processes.
- Ability to set up and maintain infrastructure as a service.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a fast-paced and dynamic team environment.
- Proactive mindset with a commitment to continuous learning and improvement.
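For a flavour of the AWS work listed above, here is a minimal boto3 sketch that lists objects under a data-lake prefix and downloads one for processing. The bucket, prefix, and key are hypothetical placeholders, and the snippet assumes AWS credentials are already configured.

```python
import boto3  # pip install boto3

s3 = boto3.client("s3")

# List raw objects under a (hypothetical) data-lake prefix.
response = s3.list_objects_v2(Bucket="example-data-lake", Prefix="raw/events/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Pull one object down for local processing.
s3.download_file(
    "example-data-lake", "raw/events/part-0000.json", "/tmp/part-0000.json"
)
```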

Posted 3 days ago

Apply

2.0 - 4.0 years

15 - 22 Lacs

Noida

Work from Office


Role Description: This is a full-time role for a Software Developer (C++/Golang) based in Noida. You will develop and optimize in-house low-latency platforms that support our high-frequency trading (HFT) and arbitrage strategies. Collaborating closely with cross-functional teams, you will enhance system performance and ensure reliable data processing.

Key Responsibilities:
- Create and maintain high-performance trading applications using C++/Golang.
- Design and implement low-latency platforms to support trading strategies.
- Optimize existing systems for improved performance and reduced latency.
- Work with real-time data processing and apply latency-reduction techniques.
- Collaborate with teams to enhance HFT and arbitrage strategies.
- Analyze system performance metrics and identify areas for improvement.
- Resolve bottlenecks to enhance code speed and efficiency.
- Implement and optimize high-performance, high-availability distributed systems.
- Document software design and implementation processes clearly.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 2 years of experience with low-latency platforms and high-performance distributed systems.
- Familiarity with Linux/Unix operating systems.
- Experience with Git or other version control systems.
- Strong analytical skills with a systematic approach to problem-solving.

Preferred Skills:
- Experience developing low-latency trading systems.
- Understanding of financial markets and trading concepts.
- Good knowledge of parallel programming paradigms and a strong grasp of memory management.
- Experience with DevOps workflows involving, but not limited to, Ansible and Jenkins.
- Excellent problem-solving and critical-thinking abilities.
- Strong communication and collaboration skills within fast-paced, multidisciplinary teams.

Posted 3 days ago

Apply

1.0 - 3.0 years

6 - 9 Lacs

Mumbai

Work from Office


At Jacobs, we're challenging today to reinvent tomorrow by solving the world's most critical problems for thriving cities, resilient environments, mission-critical outcomes, operational advancement, scientific discovery, and cutting-edge manufacturing, turning abstract ideas into realities that transform the world for good.

Your impact: Data Entry Operator. We value collaboration and believe that in-person interactions are crucial for both our culture and client delivery. We empower employees with our hybrid working policy, allowing them to split their work week between Jacobs offices/projects and remote locations, enabling them to deliver their best work.

Here's what you'll need: 10+ years.

Posted 3 days ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Hyderabad

Work from Office


Job Title: Data Engineer
Job Type: Full-Time
Location: On-site, Hyderabad, Telangana, India

Job Summary: We are seeking an accomplished Data Engineer to join one of our top customers' dynamic teams in Hyderabad. You will be instrumental in designing, implementing, and optimizing data pipelines that drive our business insights and analytics. If you are passionate about harnessing the power of big data, possess a strong technical skill set, and thrive in a collaborative environment, we would love to hear from you.

Key Responsibilities:
- Develop and maintain scalable data pipelines using Python, PySpark, and SQL.
- Implement robust data warehousing and data lake architectures.
- Leverage the Databricks platform to enhance data processing and analytics capabilities.
- Model, design, and optimize complex database schemas.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Lead and mentor junior data engineers and establish best practices.
- Troubleshoot and resolve data processing issues promptly.

Required Skills and Qualifications:
- Strong proficiency in Python and PySpark.
- Extensive experience with the Databricks platform.
- Advanced SQL and data modeling skills.
- Demonstrated experience in data warehousing and data lake architectures.
- Exceptional problem-solving and analytical skills.
- Strong written and verbal communication skills.

Preferred Qualifications:
- Experience with graph databases, particularly MarkLogic.
- Proven track record of leading data engineering teams.
- Understanding of data governance and best practices in data management.

Posted 3 days ago

Apply

4.0 - 6.0 years

2 - 6 Lacs

Vadodara

Work from Office


We aspire to be a world leader in innovative telecom and security solutions by offering cutting-edge, high-performance telecom and security solutions to business customers. Our mission is simple: to prove that Indian engineers can design, develop, and manufacture world-class technology products for customers across the world, right from India. Join our team of like-minded engineers, applied researchers, and technocrats with the will, courage, and madness to achieve this mission!

Why work at Matrix: Matrix fully integrates software and hardware across its products. Engineers here collaborate more effectively to create solutions that solve real problems and make an impact. We are responsible for every nut, bolt, and line of code in our products! As an engineer, your involvement will be critical in the entire lifecycle of a product, right from ideation to development, production, and deployment. Get to feel the sense of accomplishment that comes with creating something that solves a real and pressing problem and is used by scores of customers.

Role: SME - IVA
Function: Software Development - IVA / Computer Vision
Work Location: Vadodara, Gujarat

Who are you: You are a dynamic and solution-driven Software Engineer specializing in Intelligent Video Analytics, with a strong foundation in designing and developing advanced systems for video data processing and analysis. You excel at creating innovative solutions leveraging computer vision, machine learning, and deep learning techniques to extract meaningful insights from video streams. You will remain rooted as an engineer, a designer, and a technologist while leading a team. You take complete ownership of timely product delivery with impeccable software quality. You have the ability to navigate the team through fast-changing market needs, and you possess strong people leadership skills in mentoring young engineers.

Experience: 4+ years
Qualification: B.E./B.Tech/M.E./M.Tech (EC, Electronics, Electronics & Telecommunication, Computer Engineering, CSE) or related field

Technical Skills Required:
- Hands-on with C++, OOPS, computer vision, OpenCV, TensorFlow, web orientation, web development, and image processing.
- Hands-on experience with deep learning/computer vision projects.
- Working exposure to neural networks and CNNs, i.e., convolutional neural networks (design, training, inference).
- Experience building and optimizing libraries for fast computer vision.
- Experience curating data, creating and optimizing models, and running experiments.
- Expert knowledge of the principles, algorithms, and tools of applied deep learning, including model training strategies, data processing, cost functions, quality metrics, etc.
- Experience with one or more deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Experience porting and optimizing models and algorithms for different hardware platforms.
- Strong programming skills in Python and C++.
- Superhuman thirst for learning, including scouting for ideas and keeping up to date with advancements in the field.

Good-to-have skills:
- Exposure to projects like object detection, motion detection, or face recognition.
- Experience working with multiple video streams for simultaneous integration into machine learning algorithms.
- Experience optimizing deep learning models for embedded form factors.

What your day might look like (Role): The SME - Computer Vision is a critical part of our internal Computer Vision/Video Analytics team. This team specializes in applied research, is central to the perception algorithms deployed across our products, and partners with our hardware, software, and platform teams to help bring our products and solutions to life. An ideal candidate for this role will lead the creation of systems, algorithms, and optimizations for all things computer vision, including leading and mentoring a team of engineers who help with different aspects of the task. A worked example of one such building block follows this list.
- Collaborate with the team to understand project requirements and objectives.
- Design and implement algorithms using C/C++, OpenCV, and TensorFlow.
- Integrate video analytics solutions with existing systems or platforms.
- Participate in design reviews and brainstorming sessions with cross-functional teams.
- Execute AI algorithms using platform-specific frameworks like TensorRT and TensorFlow Lite.
- Formulate program specifications and basic prototypes.
- Efficiently develop PoCs for algorithms, with performance-centric implementation practices.
- Implement low-cost algorithm modules as needed.
- Troubleshoot issues and provide technical support during integration and deployment phases.
- Research new technologies and methodologies to enhance video analytics capabilities.
- Document your work, including writing code documentation and preparing technical reports.

What do we offer:
- Opportunity to work for an Indian tech company creating incredible products for the world, right from India.
- Be part of a challenging, encouraging, and rewarding environment to do the best work of your life.
- Competitive salary and other benefits.
- Generous leave schedule of 21 days, in addition to 9 public holidays, including holiday adjustments to convert weekends into long weekends.
- 5-day workweek with 8 flexi-days per month, allowing you to take care of responsibilities at home and at work.
- Company-paid medical insurance for the whole family (employee, spouse, kids, and parents).
- Company-paid accident insurance for the employee.
- On-premise meals, subsidized by the company.

If you are an innovative, tech-savvy individual, look no further. Click on Apply and we will reach out to you soon!
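As a small taste of the motion-detection work mentioned above, here is a minimal OpenCV frame-differencing sketch. The video path and sensitivity threshold are placeholders; real IVA products use far more robust detection than this.

```python
import cv2  # pip install opencv-python

cap = cv2.VideoCapture("sample.mp4")  # placeholder video path
ok, prev = cap.read()
if not ok:
    raise SystemExit("Could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Difference against the previous frame and threshold the change mask.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:  # arbitrary sensitivity threshold
        print("Motion detected")
    prev_gray = gray

cap.release()
```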

Posted 3 days ago

Apply

2.0 - 5.0 years

7 - 11 Lacs

Gurugram

Work from Office


About NCR Atleos

Overview: Data is at the heart of our global financial network. In fact, the ability to consume, store, analyze, and gain insight from data has become a key component of our competitive advantage. Our goal is to build and maintain a leading-edge data platform that provides highly available, consistent data of the highest quality for all users of the platform, including our customers, operations teams, and data scientists. We focus on evolving our platform to deliver exponential scale to NCR Atleos, powering our future growth.

Data & AI Engineers at NCR Atleos experience working at one of the largest and most recognized financial companies in the world, while being part of a software development team responsible for next-generation technologies and solutions. Our engineers design and build large-scale data storage, computation, and distribution systems. They partner with data and AI experts to deliver high-quality AI solutions and derived data to our consumers. We are looking for Data & AI Engineers who like to innovate and seek complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. Engineers looking to work in the areas of orchestration, data modelling, data pipelines, APIs, storage, distribution, distributed computation, consumption, and infrastructure are ideal candidates.

Responsibilities: As a Data Engineer, you will join a Data & AI team transforming our global financial network and improving the quality of the products and services we provide to our customers. You will be responsible for designing, implementing, and maintaining data pipelines and systems to support the organization's data needs. Your role will involve collaborating with data scientists, analysts, and other stakeholders to ensure data accuracy, reliability, and accessibility.

Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines to collect, process, and store structured and unstructured data from various sources.
- Data Integration: Integrate data from multiple sources such as databases, APIs, flat files, and streaming platforms into centralized data repositories.
- Data Modeling: Develop and optimize data models and schemas to support analytical and operational requirements. Implement data transformation and aggregation processes as needed.
- Data Quality Assurance: Implement data validation and quality assurance processes to ensure the accuracy, completeness, and consistency of data throughout its lifecycle.
- Performance Optimization: Monitor and optimize data processing and storage systems for performance, reliability, and cost-effectiveness. Identify and resolve bottlenecks and inefficiencies in data pipelines, and leverage automation and AI to improve overall operations.
- Infrastructure Management: Manage and configure cloud-based or on-premises infrastructure components such as databases, data warehouses, compute clusters, and data processing frameworks.
- Collaboration: Collaborate with cross-functional teams, including data scientists, analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions that meet business objectives.
- Documentation and Best Practices: Document data pipelines, systems architecture, and best practices for data engineering. Share knowledge and provide guidance to colleagues on data engineering principles and techniques.
- Continuous Improvement: Stay updated on the latest technologies, tools, and trends in data engineering, and recommend improvements to existing processes and systems.

Qualifications and Skills:
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Proven experience in data engineering or related roles, with a strong understanding of data processing concepts and technologies.
- Mastery of programming languages such as Python, Java, or Scala.
- Knowledge of database systems such as SQL, NoSQL, and data warehousing solutions.
- Knowledge of stream processing technologies such as Kafka or Apache Beam.
- Experience with distributed computing frameworks such as Apache Spark, Hadoop, or Apache Flink.
- Experience deploying pipelines on cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience implementing enterprise systems in production settings for AI and natural language processing. Exposure to self-supervised learning, transfer learning, and reinforcement learning is a plus.
- Full-stack experience to build best-fit solutions leveraging Large Language Models (LLMs) and Generative AI, with a focus on privacy, security, and fairness.
- Good engineering skills to design AI output with nodes and nested nodes in JSON, array, or HTML formats for as-is consumption and display on dashboards/portals.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with data visualization tools such as Tableau or Power BI.

EEO Statement: NCR Atleos is an equal-opportunity employer. It is NCR Atleos policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law.

Statement to Third-Party Agencies: To ALL recruitment agencies: NCR Atleos only accepts resumes from agencies on the NCR Atleos preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Atleos employees, or any NCR Atleos facility. NCR Atleos is not responsible for any fees or charges associated with unsolicited resumes.

Posted 3 days ago

Apply

4.0 - 7.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Naukri logo

Date: 18 Jun 2025 | Location: Bangalore, KA, IN | Company: Alstom
At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams, to turnkey systems, services, infrastructure, signalling, and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars.
Your future role
Take on a new challenge and apply your expertise in data solutions to a cutting-edge field. You'll work alongside innovative and collaborative teammates. You'll play a pivotal role in defining, developing, and sustaining advanced data solutions that empower our industrial programs. Day-to-day, you'll work closely with teams across the business (such as IT, engineering, and operations), design scalable data models, ensure data quality, and much more. You'll specifically take care of building multi-tenant data collectors and processing units, as well as creating customizable analytical dashboards, but also evaluate opportunities from emerging technologies.
We'll look to you for:
Designing technical solutions for production-grade and cyber-secure data systems
Building multi-tenant data storage and streaming solutions for batch and near-real-time flows (see the sketch after this posting)
Creating scalable data models, including SQL and NoSQL modeling
Enhancing data quality and applying robust data management and security practices
Developing customizable analytical dashboards
Applying strong testing and quality assurance practices
All about you
We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role:
5 to 10 years of experience in IT, digital companies, software development, or startups
Extensive experience with data processing and software development in Python or Java/Scala environments
Proficiency in developing solutions with Apache Spark, Apache Kafka, and/or NiFi for production
Expertise in data modeling and SQL database configuration (e.g. Postgres, MariaDB, MySQL)
Knowledge of DevOps practices, including Docker
Experience with Git and release management
Familiarity with cloud platforms like Microsoft Azure, AWS, or GCP (desirable)
Understanding of network and security protocols such as SSL, certificates, IPSEC, Active Directory, and LDAP (desirable)
Knowledge of machine learning frameworks like scikit-learn, R, or TensorFlow (desirable)
Good understanding of the Apache open-source ecosystem (desirable)
Fluent English; French is a plus
Things you'll enjoy
Join us on a life-long transformative journey: the rail industry is here to stay, so you can grow and develop new skills and experiences throughout your career.
You'll also:
Enjoy stability, challenges, and a long-term career free from boring daily routines
Work with new security standards for rail signalling
Collaborate with transverse teams and helpful colleagues
Contribute to innovative projects
Utilise our flexible and inclusive working environment
Steer your career in whatever direction you choose across functions and countries
Benefit from our investment in your development, through award-winning learning
Progress towards senior data leadership roles
Benefit from a fair and dynamic reward package that recognises your performance and potential, plus comprehensive and competitive social coverage (life, medical, pension)
You don't need to be a train enthusiast to thrive with us. We guarantee that when you step onto one of our trains with your friends or family, you'll be proud. If you're up for the challenge, we'd love to hear from you!
Important to note
As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
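As a rough illustration of the batch and near-real-time flows this role mentions, here is a minimal consumer sketch, assuming the kafka-python and psycopg2 libraries; the topic, connection string, table, and columns are invented for the example.

```python
import json

import psycopg2
from kafka import KafkaConsumer  # kafka-python client

# Hypothetical example: consume telemetry from Kafka and upsert into Postgres.
consumer = KafkaConsumer(
    "train-telemetry",                      # placeholder topic
    bootstrap_servers="broker:9092",        # placeholder broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,
)

conn = psycopg2.connect("dbname=mobility user=etl")  # placeholder DSN

UPSERT = """
    INSERT INTO telemetry (unit_id, metric, value, recorded_at)
    VALUES (%(unit_id)s, %(metric)s, %(value)s, %(recorded_at)s)
    ON CONFLICT (unit_id, metric, recorded_at) DO UPDATE SET value = EXCLUDED.value;
"""

for message in consumer:
    with conn.cursor() as cur:
        cur.execute(UPSERT, message.value)
    conn.commit()      # commit the database write first...
    consumer.commit()  # ...then the Kafka offset, so a crash replays rather than drops
```

Committing the database write before the Kafka offset is a common at-least-once design choice: a failure between the two commits means the message is reprocessed, not lost, and the upsert keeps the replay idempotent.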

Posted 3 days ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

Job Purpose
The Data Analyst plays a crucial lead role in managing and optimizing business intelligence solutions using Power BI.
Leadership and Strategy: Lead the design, development, and deployment of Power BI reports and dashboards. Provide strategic direction for data visualization and business intelligence initiatives. Interface with the Business Owner, Project Manager, Planning Manager, Resource Managers, etc. Develop a roadmap for the execution of complex data analytics projects.
Data Modeling and Integration: Develop complex data models, establish relationships, and ensure data integrity. Oversee data integration from various sources.
Advanced Analytics: Perform advanced data analysis using DAX (Data Analysis Expressions) and other analytical tools to derive insights and support decision-making.
Collaboration: Work closely with stakeholders to gather requirements, define data needs, and ensure the delivery of high-quality BI solutions.
Performance Optimization: Optimize solutions for performance, ensuring efficient data processing and report rendering.
Mentorship: Mentor and guide junior developers, providing technical support and best practices for Power BI development.
Data Security: Implement and maintain data security measures, ensuring compliance with data protection regulations.
Demonstrated experience of leading complex projects with a team of varied experience levels.
You are meant for this job if:
Educational Background: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. Experience in working with unstructured data and data integration.
Technical Skills: Proficiency in Power BI, DAX, SQL, and data modeling, with exposure to data engineering. Experience with data integration tools and ETL processes. Hands-on experience with Snowflake (see the sketch after this posting).
Experience: 7-8 years of experience in business intelligence and data analytics, with a focus on Power BI.
Soft Skills: Strong analytical and problem-solving skills, excellent communication abilities, and the capacity to lead and collaborate with global cross-functional teams.
Skills
Change Leadership
Process Mapping
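Since the role pairs Snowflake with BI modeling and ETL, a minimal extract-and-aggregate sketch might look like the following, assuming the snowflake-connector-python and pandas packages; the account, credentials, database objects, and column names are all placeholders, not details from the posting.

```python
import pandas as pd
import snowflake.connector

# Hypothetical example: pull raw sales rows from Snowflake and shape them
# to the grain a dashboard fact table typically needs.
conn = snowflake.connector.connect(
    account="my_account",     # placeholder credentials
    user="etl_user",
    password="...",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="RAW",
)

df = pd.read_sql("SELECT order_id, region, order_date, amount FROM raw_orders", conn)

# Aggregate to region x month, a common shape for BI consumption.
monthly = (
    df.assign(order_month=pd.to_datetime(df["order_date"]).dt.to_period("M").astype(str))
      .groupby(["region", "order_month"], as_index=False)["amount"]
      .sum()
)
monthly.to_csv("monthly_sales.csv", index=False)  # staged for the BI layer
```

In practice much of this shaping would live in DAX or in Snowflake views; the point of the sketch is only the extract-transform-stage pattern the posting's ETL requirement describes.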

Posted 3 days ago

Apply

1.0 - 9.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Naukri logo

Career Category: Information Systems
Job Description
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas -Oncology, Inflammation, General Medicine, and Rare Disease- we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Sr Associate Software Engineer - R&D Omics
What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for Research Informatics and for the development and maintenance of software in support of target/biomarker discovery at Amgen. This role requires proficiency in code development (e.g. Python, R), and some knowledge of CI/CD processes and cloud computing technologies (e.g. AWS, Google Cloud). Additionally, the ability to work with cross-functional teams and experience in agile practices is desired.
Develop software to transform and visualize omics (genomics, proteomics, transcriptomics) data using programming languages such as Python, Java, R.
Develop data processing pipelines for large datasets in the cloud (e.g. Nextflow); integrate with other data sources where applicable.
Collaborate with the other engineering team members to ensure all services are reliable, maintainable, and well-integrated into existing platforms.
Adhere to best practices for testing and designing reusable code.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek has these qualifications.
Basic Qualifications:
Master's degree and 1 to 3 years of experience in Software Development, IT, or a related field, OR Bachelor's degree and 3 to 5 years of experience in Software Development, IT, or a related field, OR Diploma and 7 to 9 years of experience in Software Development, IT, or a related field.
Preferred Qualifications:
2+ years of experience in biopharma or life sciences
Experience in RESTful API development, e.g. Flask, MuleSoft (see the sketch after this posting)
Experience in pipeline development using one or more of the following programming languages (Python, Nextflow, etc.)
Experience with Databricks
Experience with cloud computing platforms and infrastructure
Experience with application development (Django, RShiny, Plotly Dash, etc.)
Work experience in the biotechnology or pharmaceutical industry
Experience using and adopting an Agile framework
Soft Skills:
Strong learning agility; ability to pick up new technologies used to support early drug discovery data analysis needs
Collaborative, with good communication skills
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being.
From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team: careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
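As a sketch of the RESTful API work mentioned above (Flask is one of the frameworks the posting names), the following assumes a hypothetical processed expression matrix on disk; the file name, route, and column names are invented for illustration.

```python
import pandas as pd
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical processed omics table: rows are genes, columns are samples.
EXPRESSION = pd.read_csv("expression_matrix.csv", index_col="gene_id")

@app.route("/genes/<gene_id>/summary")
def gene_summary(gene_id: str):
    """Return simple per-gene expression statistics as JSON."""
    if gene_id not in EXPRESSION.index:
        abort(404, description=f"unknown gene {gene_id}")
    values = EXPRESSION.loc[gene_id]
    return jsonify(
        gene_id=gene_id,
        n_samples=int(values.count()),
        mean_expression=float(values.mean()),
        max_expression=float(values.max()),
    )

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```

A real service in this space would sit in front of a database or a Nextflow pipeline's outputs rather than a CSV, but the route-plus-serialization shape is the same.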

Posted 3 days ago

Apply

5.0 - 14.0 years

10 - 11 Lacs

Bengaluru

Work from Office

Naukri logo

Job Description: Log streaming knowledge with Splunk, Cribl expertise, Linux OS experience, hyperscaler log streaming knowledge, CI/CD pipelines, Python scripting understanding; 5-7 years of experience.
Multicloud Architect (12-14 years of experience)
Expertise on hyperscalers (AWS/Azure/GCP), with good knowledge and understanding of hyperscaler services.
Design and implement secure, scalable multi-cloud architectures that integrate cloud security best practices. Onapsis knowledge is good to have.
Develop and maintain security frameworks and models that align with industry standards (e.g. NIST, CIS) for multi-cloud environments.
Implement security policies, procedures, and tools for securing workloads, data, and applications across multiple cloud platforms.
Ensure that data protection practices are in place across all cloud environments, including data encryption, key management, and secure data transfer.
Ensure compliance with industry standards and regulatory requirements (e.g. GDPR, HIPAA, SOC 2, PCI-DSS) in multi-cloud environments.
Lead cloud security audits and assessments to verify compliance and security posture, and manage remediation efforts where necessary.
Strong understanding of cloud-native security practices and tools (e.g. CloudFormation, Terraform, Kubernetes, Docker).
Should have experience in driving projects with a team.
E5 - Onapsis Architect
Job Summary: As an Onapsis Architect, you will play a key role in the implementation and configuration of Onapsis solutions for our customers. You will work closely with the customer's IT and security teams to deploy Onapsis products effectively. You will also be responsible for providing technical support during the implementation process and offering guidance on best practices.
12-14 years of experience.
Expertise in deploying Onapsis products in customer environments, including initial setup and configuration.
Coordinate with internal teams and clients to ensure smooth product installations.
Provide technical assistance during the deployment process.
Contribute to the development of best practices for Onapsis deployments.
Export Onapsis vulnerability data using REST APIs to third-party systems for dashboards and reporting.
Experience with SAP BASIS/HANA and FRUN, ABAP, HANA, Web Dispatcher, NetWeaver, Java, BOBJ.
Linux/networking knowledge.
Experience with security tools and platforms (e.g. SIEM, vulnerability management, etc.).
Should have experience in driving projects and guiding the team.
Strong communication skills and ability to work directly with customers.
E4 - Senior Engineer
Job Summary: We are seeking a skilled Log Onboarding Engineer to join our team, specializing in the integration, onboarding, and management of logs into Cribl, Splunk, and other related systems. The role requires in-depth experience with log management tools, specifically Cribl and hyperscalers (AWS/Azure/GCP), to optimize the flow of logs, transform data, and ensure proper routing to Splunk and other destinations for analysis. This position will help maintain the integrity and performance of the custom services we offer to our customers.
8-10 years of experience.
Log streaming observability knowledge with Cribl and Splunk.
Linux OS/networking experience.
Hyperscaler (AWS/GCP/Azure) log streaming knowledge.
Understanding of hyperscaler services.
CI/CD pipeline and Python scripting understanding.
Design, implement, and manage log ingestion pipelines into Splunk and third-party destinations using Cribl for data transformation, filtering, and routing.
Configure log forwarding and integration from various sources (hyperscaler services, network devices, firewalls, servers, applications) to cloud storage and event-streaming solutions.
Troubleshoot, optimize, and ensure the smooth flow of data into log destinations for real-time analysis and alerting.
Leverage Cribl to transform raw log data, enrich it with additional context, and ensure it is properly formatted and routed before sending it to Splunk or other downstream systems (see the sketch after this posting).
Build and manage data processing pipelines to filter out irrelevant or noisy data and retain important log information.
Create and maintain Cribl pipelines for automated log enrichment, anonymization, and masking (if necessary).
Ensure proper log collection, normalization, and retention to meet regulatory and organizational security requirements.
Work with Security Operations (SecOps) teams to ensure the right logs are captured for threat detection, incident response, and compliance purposes.
Collaborate with cross-functional teams (DevOps, Security, IT) to understand and define log onboarding requirements.
Solid understanding of SIEM concepts and how log data is used for security monitoring and compliance.
At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive.
Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
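To illustrate the log-shaping responsibilities above (filtering noise, masking sensitive fields, forwarding to Splunk), here is a minimal Python sketch against Splunk's HTTP Event Collector using the requests library. The HEC URL, token, and event fields are placeholders, and a production setup would do this inside a Cribl pipeline rather than custom code.

```python
import re

import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def shape(event: dict) -> dict | None:
    """Drop noisy debug events and mask PII before forwarding."""
    if event.get("severity") == "DEBUG":
        return None  # filter noise at the edge, as a Cribl pipeline would
    event["message"] = EMAIL.sub("<masked-email>", event.get("message", ""))
    return event

def forward(event: dict) -> None:
    shaped = shape(event)
    if shaped is None:
        return
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        json={"event": shaped, "sourcetype": "app:logs"},
        timeout=5,
    )
    resp.raise_for_status()

forward({"severity": "ERROR", "message": "login failed for user@example.com"})
```

Masking before the event leaves the source is the design point: once raw PII lands in the SIEM index it is subject to retention, so enrichment and anonymization belong in the routing layer.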

Posted 3 days ago

Apply

Exploring Data Processing Jobs in India

The data processing job market in India is thriving with opportunities for job seekers in the field. With the growing demand for data-driven insights in various industries, the need for professionals skilled in data processing is on the rise. Whether you are a fresh graduate looking to start your career or an experienced professional looking to advance, there are ample opportunities in India for data processing roles.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Pune
  5. Hyderabad

These major cities in India are actively hiring for data processing roles, with a multitude of job opportunities available for job seekers.

Average Salary Range

The average salary range for data processing professionals in India varies based on experience and skill level. Entry-level positions can expect to earn between INR 3-6 lakh per annum, while experienced professionals can earn upwards of INR 10 lakh per annum.

Career Path

A typical career path in data processing may include roles such as Data Analyst, Data Engineer, Data Scientist, and Data Architect. As professionals gain experience and expertise in the field, they may progress from Junior Data Analyst to Senior Data Analyst, and eventually to roles such as Data Scientist or Data Architect.

Related Skills

In addition to data processing skills, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and R. Strong analytical and problem-solving skills are also essential for success in data processing roles.

Interview Questions

  • What is data processing? (basic)
  • Explain the difference between data cleaning and data transformation. (medium)
  • How do you handle missing data in a dataset? (medium; see the sketch after this list)
  • What is the importance of data normalization in data processing? (medium)
  • Can you explain the process of feature selection in machine learning? (advanced)
  • How do you evaluate the performance of a machine learning model? (advanced)
  • What is the difference between supervised and unsupervised learning? (basic)
  • How do you deal with outliers in a dataset? (medium)
  • Explain the concept of dimensionality reduction. (medium)
  • What is the bias-variance tradeoff in machine learning? (advanced)
  • How would you handle a dataset with high dimensionality? (medium)
  • Can you explain the process of clustering in data processing? (medium)
  • What is the role of regularization in machine learning? (advanced)
  • How do you assess the quality of a machine learning model? (medium)
  • Can you explain the concept of overfitting in machine learning? (basic)
  • What is the difference between classification and regression in machine learning? (basic)
  • How do you select the right algorithm for a machine learning task? (medium)
  • Explain the process of data preprocessing in machine learning. (medium)
  • How do you handle imbalanced datasets in machine learning? (medium)
  • What is the purpose of cross-validation in machine learning? (medium; see the sketch after this list)
  • Can you explain the difference between batch processing and real-time processing? (medium)
  • How do you handle categorical data in a dataset? (basic)
  • What is the role of data visualization in data processing? (basic)
  • How do you ensure data security and privacy in data processing? (medium)
  • What are the advantages of using cloud computing for data processing? (medium)
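
Two of the questions above, handling missing data and cross-validation, are worth sketching concretely. The following assumes pandas and scikit-learn; the toy data and column names are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Invented toy data with missing values.
df = pd.DataFrame({
    "age": [25, 32, np.nan, 41, 29, np.nan, 53, 38],
    "income": [30, 52, 45, np.nan, 38, 61, 75, 48],
    "purchased": [0, 1, 0, 1, 0, 1, 1, 0],
})

X, y = df[["age", "income"]], df["purchased"]

# Imputation and scaling inside the pipeline are fitted per training fold,
# so cross-validation scores are not inflated by information leakage.
model = make_pipeline(
    SimpleImputer(strategy="median"),  # one common answer to "handle missing data"
    StandardScaler(),                  # normalization, another question above
    LogisticRegression(),
)

scores = cross_val_score(model, X, y, cv=4)
print(f"accuracy per fold: {scores.round(2)}, mean: {scores.mean():.2f}")
```

The detail interviewers usually probe for is that the imputer and scaler sit inside the pipeline, so their statistics are learned only from the training folds; fitting preprocessing on the full dataset before cross-validating leaks information and inflates the scores.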

Closing Remark

As you explore opportunities in the data processing job market in India, remember to prepare thoroughly for interviews and showcase your skills and expertise confidently. With the right combination of skills and experience, you can embark on a successful career in data processing in India. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
