
6093 Scala Jobs - Page 19

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bengaluru, Karnataka, India | Job ID: R-232528 | Date posted: 28/07/2025
Job Title: Analyst – Data Engineer
Introduction to role: Are you ready to make a difference in the world of data science and advanced analytics? As a Data Engineer within the Commercial Strategic Data Management team, you'll play a pivotal role in transforming data science solutions for the Rare Disease Unit. Your mission will be to craft, develop, and deploy data science solutions that have a real impact on patients' lives. By leveraging cutting-edge tools and technology, you'll enhance delivery performance and data engineering capabilities, creating a seamless platform for the Data Science team and driving business growth. Collaborate closely with the Data Science and Advanced Analytics team, US Commercial leadership, the Sales Field Team, and Field Operations to build data science capabilities that meet commercial needs. Are you ready to take on this exciting challenge?
Accountabilities:
- Collaborate with the Commercial multi-functional team to find opportunities for using internal and external data to enhance business solutions.
- Work closely with business and advanced data science teams on cross-functional projects, delivering complex data science solutions that contribute to the Commercial Organization.
- Manage platforms and processes for complex projects using a wide range of data engineering techniques in advanced analytics.
- Prioritize business and information needs with management; translate business logic into technical requirements, such as creating queries, stored procedures, and scripts.
- Interpret data, process it, analyze results, present findings, and provide ongoing reports.
- Develop and implement databases, data collection systems, data analytics, and strategies that optimize data efficiency and quality.
- Acquire data from primary or secondary sources and maintain databases/data systems.
- Identify and define new process improvement opportunities.
- Manage and support data solutions in BAU scenarios, including data profiling, designing data flows, creating business alerts for fields, and query optimization for ML models.
Essential Skills/Experience:
- BS/MS in a quantitative field (Computer Science, Data Science, Engineering, Information Systems, Economics)
- 5+ years of work experience with data and database skills such as Python, SQL, Snowflake, Amazon Redshift, MongoDB, Apache Spark, Apache Airflow, AWS cloud and Amazon S3, Oracle, Teradata
- Good experience in Apache Spark, Talend Administration Center, AWS Lambda, MongoDB, Informatica, or SQL Server Integration Services
- Experience in building ETL pipelines and data integration (a brief illustrative sketch follows this listing)
- Efficient data management: extract, consolidate, and store large datasets with improved data quality and consistency
- Streamlined data transformation: convert raw data into usable formats at scale, automate tasks, and apply business rules
- Good written and verbal skills to communicate complex methods and results to diverse audiences; willing to work in a cross-cultural environment
- Analytical mind with a problem-solving inclination; proficiency in data manipulation, cleansing, and interpretation
- Experience in support and maintenance projects, including ticket handling and process improvement
- Workflow orchestration: schedule and manage data pipelines for smooth flow and automation
- Scalability and performance: handle large data volumes with optimized processing capabilities
- Experience with Git
Desirable Skills/Experience:
- Knowledge of distributed computing and Big Data technologies like Hive, Spark, Scala, HDFS; use these technologies along with statistical tools like Python/R
- Experience working with HTTP requests/responses and REST API services
- Familiarity with data visualization tools like Tableau, Qlik, Power BI, Excel charts/reports
- Working knowledge of Salesforce/Veeva CRM, data governance, and data mining algorithms
- Hands-on experience with EHR, administrative claims, and laboratory data (e.g., Prognos, IQVIA, Komodo, Symphony claims data)
- Good experience in consulting, healthcare, or biopharmaceuticals
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca's Alexion division, you'll find an environment where your work truly matters. Embrace the opportunity to grow and innovate within a rapidly expanding portfolio. Experience the entrepreneurial spirit of a leading biotech combined with the resources of a global pharma. You'll be part of an energizing culture where connections are built to explore new ideas. As a member of our commercial team, you'll meet the needs of under-served patients worldwide. With tailored development programs designed for skill enhancement and fostering empathy for patients' journeys, you'll align your growth with our mission. Supported by exceptional leaders and peers across marketing and compliance, you'll drive change with integrity in a culture celebrating diversity and innovation. Ready to make an impact? Apply now to join our team!
Date Posted: 29-Jul-2025 | Closing Date: 04-Aug-2025
Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness.
The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy, (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.
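Purely for illustration (not part of the listing above): a minimal Spark batch ETL step in Scala of the kind the Essential Skills describe, reading raw files from object storage, cleansing them, and writing curated, partitioned output. The bucket paths, the "claims" dataset, and all column names are invented placeholders.

```scala
// Hypothetical batch ETL sketch: extract raw CSV from S3, transform, load curated Parquet.
import org.apache.spark.sql.{SparkSession, functions => F}

object ClaimsEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("claims-etl")
      .getOrCreate()

    // Extract: raw claims data landed in S3 as CSV (placeholder bucket/prefix)
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://example-bucket/raw/claims/")

    // Transform: basic cleansing and standardisation
    val cleaned = raw
      .dropDuplicates("claim_id")
      .filter(F.col("claim_amount").isNotNull)
      .withColumn("claim_date", F.to_date(F.col("claim_date"), "yyyy-MM-dd"))
      .withColumn("load_ts", F.current_timestamp())

    // Load: partitioned Parquet for downstream warehousing (e.g. a Redshift/Snowflake staging area)
    cleaned.write
      .mode("overwrite")
      .partitionBy("claim_date")
      .parquet("s3a://example-bucket/curated/claims/")

    spark.stop()
  }
}
```

In a real pipeline a scheduler such as Apache Airflow (also named in the listing) would typically trigger a job like this and handle retries and alerting.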

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Engage in code reviews to ensure adherence to best practices and standards.
Professional & Technical Skills:
- Must-have skills: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and analytics.
- Familiarity with programming languages such as Python or Scala.
- Knowledge of data visualization techniques and tools.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that the applications developed meet both user needs and technical requirements. Your role will be pivotal in fostering a collaborative environment that encourages innovation and problem-solving among team members.
Roles & Responsibilities:
- Minimum of 4 years of experience in data engineering or similar roles.
- Proven expertise with Databricks and data processing frameworks.
- Technical skills: SQL, Spark, PySpark, Databricks, Python, Scala, Spark SQL.
- Strong understanding of data warehousing, ETL processes, and data pipeline design.
- Experience with SQL, Python, and Spark.
- Excellent problem-solving and analytical skills.
- Effective communication and teamwork abilities.
Professional & Technical Skills:
- Experience and knowledge of Azure SQL Database, Azure Data Factory, ADLS.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services.
- This position is based in Pune.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company: Changing the world through digital experiences is what Adobe's all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences. We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Digital Experience (DX) (https://www.adobe.com/experience-cloud.html) is a USD 3B+ business serving the needs of enterprise businesses, including 95%+ of Fortune 500 organizations. Adobe Journey Optimizer (AJO) within DX provides a platform for designing cross-channel customer experiences and provides an environment for visual campaign orchestration, real-time interaction management and cross-channel execution. It is built natively on the Adobe Experience Platform and combines a unified, real-time customer profile, an API-first open framework, centralized offer decisioning, and artificial intelligence (AI) and machine learning (ML) for personalization and optimization. Beyond the usual responsibility of designing, developing, documenting, and thoroughly testing code, Computer Scientists @ Adobe own features of varying complexity, which may require understanding interactions with other parts of the system, moderately sophisticated algorithms and good design judgment. We are looking for strong and passionate engineers to join our team as we scale the business by building the next-gen products and contributing to our existing offerings.
What you'll do: This is an individual contributor position. Expectations are along the following lines:
- Responsible for the design and architecture of new products.
- Work in full DevOps mode, being responsible for all phases of engineering: from early specs, design/architecture, technology choice, development, unit-testing/integration automation, and deployment.
- Collaborate with architects, product management and other engineering teams to build the technical vision and road map for the team.
- Build technical specifications, prototypes and presentations to communicate your ideas.
- Be well versed in emerging industry technologies and trends, and have the ability to communicate that knowledge to the team and use it to influence product direction.
- Orchestrate with the team to develop a product or parts of a large product.
Requirements:
- B.Tech / M.Tech degree in Computer Science from a premier institute.
- 7–9.5 years of relevant experience in software development.
- Excellent computer science fundamentals and a good understanding of the design and performance of algorithms.
- Proficient in Java/Scala programming.
- Proficient in writing code that is reliable, maintainable, secure, and performant.
- Knowledge of Azure services and/or AWS.
Internal Opportunities: We're glad that you're pursuing career development opportunities at Adobe. Here's what you'll need to do: apply with your complete LinkedIn profile or resume/CV, and schedule a Check-in meeting with your manager to discuss this internal opportunity and your career aspirations. Check-ins should include ongoing discussions about expectations, feedback and career development. Learn more about Check-in here. Learn more about the internal career opportunities process in this FAQ. If you're contacted for an interview, here are some tips. At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach where ongoing feedback flows freely. If you're looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer. Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, or veteran status. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.
About the team: Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run 10M+ queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate and access the data in the lake, for both streaming and batch data. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.
About the Role: Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered "Yes" to these questions, this role is for you!
What you will be doing:
- You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and with minimal cost is a top priority.
- You will be making changes to the underlying systems, and if an opportunity arises, you can contribute your work back into open source.
- You will also be responsible for supporting internal customers and on-call services for the systems we host. Making sure we provide a stable environment and a great user experience is another top priority for the team.
We are excited if you have:
- 7+ years of production experience building big data platforms based upon Spark, Trino or equivalent
- Strong programming expertise in Java, Scala, Kotlin or another JVM language
- A robust grasp of distributed systems concepts, algorithms, and data structures
- Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
- Experience working with at least 3 of the technologies/tools mentioned here: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
- Extensive hands-on experience with a public cloud: AWS or GCP
- BS/MS degree in CS or equivalent
- AI literacy / AI growth mindset
Benefits: Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources.
Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter. The Roku Culture Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 1 week ago

Apply

3.0 years

4 Lacs

Delhi

On-site

Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi | Experience: 3+ years | Education: B.E./B.Tech/MCA/M.Sc (IT or CS)/MS
Salary: Up to ₹80k (final offer depends on the interview and experience)
Notice Period: Immediate joiners to 20 days
Candidates from Delhi/NCR will be preferred.
Job Summary: We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.
Key Responsibilities:
- Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
- Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
- Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
- Develop and manage workflow orchestration using Apache Airflow.
- Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
- Optimize MapReduce and Spark jobs for performance, scalability, and efficiency (see the sketch after this listing).
- Ensure data quality, governance, and consistency across the pipeline.
- Collaborate with data engineering teams to build scalable and high-performance data solutions.
- Monitor, debug, and enhance big data workflows to improve reliability and efficiency.
Required Skills & Experience:
- 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
- Strong expertise in ETL processes, data transformation, and data warehousing.
- Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
- Proficiency in SQL and handling structured and unstructured data.
- Experience with NoSQL databases like MongoDB.
- Strong programming skills in Python or Scala for scripting and automation.
- Experience in optimizing Spark and MapReduce jobs for high-performance computing.
- Good understanding of data lake architectures and big data best practices.
Preferred Qualifications:
- Experience in real-time data streaming and processing.
- Familiarity with Docker/Kubernetes for deployment and orchestration.
- Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.
If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!
Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
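As a purely illustrative aside (not part of the posting), the sketch below shows one common way to "optimize Spark jobs" of the kind the responsibilities mention: broadcasting a small dimension table to avoid a shuffle-heavy join. The Hive databases, tables, and column names are hypothetical.

```scala
// Hypothetical Spark optimisation sketch: map-side (broadcast) join instead of a full shuffle.
import org.apache.spark.sql.{SparkSession, functions => F}

object EnrichEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("enrich-events")
      .enableHiveSupport()   // assumes a Hive metastore is configured
      .getOrCreate()

    val events  = spark.table("raw_db.events")          // large fact table
    val devices = spark.table("ref_db.device_lookup")   // small dimension table

    // broadcast() hints Spark to ship the small table to every executor,
    // replacing a sort-merge join (full shuffle) with a map-side join.
    val enriched = events.join(F.broadcast(devices), Seq("device_id"), "left")

    enriched
      .repartition(F.col("event_date"))   // align partitions with the output layout
      .write
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("curated_db.events_enriched")

    spark.stop()
  }
}
```

Whether a broadcast join actually helps depends on the size of the dimension table relative to executor memory; it is one tuning lever among many (partitioning, caching, file formats).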

Posted 1 week ago

Apply

5.0 - 9.0 years

3 - 9 Lacs

No locations specified

On-site

Join Amgen's Mission of Serving Patients. At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
Sr Associate IS Architect
What you will do: Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to deliver actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member that assists in the design and development of the data pipeline.
- Stand up and enhance BI reporting capabilities through Cognos, Power BI or similar tools.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (see the illustrative sketch after this listing).
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
What we expect of you: We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master's degree / Bachelor's degree with 5-9 years of experience in Computer Science, IT or a related field.
Functional Skills (Must-Have):
- Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience using Databricks for building ETL pipelines and handling big data processing.
- Experience with data warehousing platforms such as Amazon Redshift or Snowflake.
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets.
- Experience with BI reporting tools such as Cognos, Power BI and/or Tableau.
- Experience with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps.
Good-to-Have Skills:
- Experience with cloud platforms such as AWS, particularly data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena).
- Experience with the Anaplan platform, including building, managing, and optimizing models and workflows, including scalable data integrations.
- Understanding of machine learning pipelines and frameworks for ML/AI models.
Professional Certifications: AWS Certified Data Engineer (preferred); Databricks Certified (preferred).
Soft Skills: Excellent critical-thinking and problem-solving skills; strong communication and collaboration skills; demonstrated awareness of how to function in a team setting; demonstrated presentation skills.
What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
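For illustration only (not Amgen's code or process): a minimal Spark data-quality gate in Scala of the kind an ETL pipeline like the one described above might run before publishing data. The dataset, rules, paths, and thresholds are hypothetical.

```scala
// Hypothetical data-quality gate: count rule violations, publish only if all rules pass.
import org.apache.spark.sql.{DataFrame, SparkSession, functions => F}

object QualityGate {
  /** Returns a map of rule name -> number of violating rows. */
  def check(df: DataFrame): Map[String, Long] = Map(
    "null_patient_id"   -> df.filter(F.col("patient_id").isNull).count(),
    "negative_quantity" -> df.filter(F.col("quantity") < 0).count(),
    "duplicate_rows"    -> (df.count() - df.dropDuplicates("record_id").count())
  )

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("quality-gate").getOrCreate()

    val staged = spark.read.parquet("s3a://example-bucket/staging/dispense_records/")
    val violations = check(staged)
    violations.foreach { case (rule, count) => println(s"$rule: $count") }

    // Promote the data only if every rule passes; otherwise fail the pipeline run
    if (violations.values.forall(_ == 0))
      staged.write.mode("overwrite").parquet("s3a://example-bucket/published/dispense_records/")
    else
      sys.error(s"Data quality checks failed: $violations")
  }
}
```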

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role: Refer to responsibilities.
You will be responsible for:
Job Summary: Build solutions for real-world problems in workforce management for retail. You will work with a team of highly skilled developers and product managers throughout the entire software development life cycle of the products we own. In this role you will be responsible for designing, building, and maintaining our big data pipelines. Your primary focus will be on developing data pipelines using available technologies.
In this job, I'm accountable for following our Business Code of Conduct and always acting with integrity and due diligence, and have these specific risk responsibilities:
- Represent Talent Acquisition in all forums/seminars pertaining to process, compliance and audit.
- Perform other miscellaneous duties as required by management.
- Driving CI culture, implementing CI projects and innovation within the team.
- Design and implement scalable and reliable data processing pipelines using Spark/Scala/Python and the Hadoop ecosystem.
- Develop and maintain ETL processes to load data into our big data platform.
- Optimize Spark jobs and queries to improve performance and reduce processing time.
- Work with product teams to communicate and translate needs into technical requirements.
- Design and develop monitoring tools and processes to ensure data quality and availability.
- Collaborate with other teams to integrate data processing pipelines into larger systems.
- Deliver high-quality code and solutions, bringing solutions into production.
- Perform code reviews to optimise the technical performance of data pipelines.
- Continually look for how we can evolve and improve our technology, processes, and practices.
- Lead group discussions on system design and architecture.
- Manage and coach individuals, providing regular feedback and career development support aligned with business goals.
- Allocate and oversee team workload effectively, ensuring timely and high-quality outputs.
- Define and streamline team workflows, ensuring consistent adherence to SLAs and data governance practices.
- Monitor and report key performance indicators (KPIs) to drive continuous improvement in delivery efficiency and system uptime.
- Oversee resource allocation and prioritization, aligning team capacity with project and business demands.
Key people and teams I work with in and outside of Tesco; people, budgets and other resources I am accountable for in my job: TBS & Tesco Senior Management, TBS Reporting Team, Tesco UK/ROI/Central Europe, business stakeholders, and any other accountabilities assigned by the business.
Skills: ETL, YARN, Spark, Hive, Hadoop, PySpark/Python, Linux/Unix/Shell environments (any one), query platforms using Spark/Scala, query optimisation. Good to have: Kafka, REST API/reporting tools.
Experience relevant for this job:
- 7+ years of experience in building and maintaining big data platforms.
- Strong knowledge of distributed computing principles and big data technologies such as Hadoop, Spark, Streaming, etc.
- Experience with ETL processes and data modelling.
- Problem-solving and troubleshooting skills.
- Working knowledge of Oozie/Airflow.
- Experience in writing unit test cases and shell scripting.
- Ability to work independently and as part of a team in a fast-paced environment.
You will need: Refer to responsibilities.
What's in it for you? At Tesco, we are committed to providing the best for you.
As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their families. Our medical insurance provides coverage for dependents including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.
About Us: Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.
Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services operation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business.
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Experience: 4.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Netskope.)
What do you need for this opportunity? Must-have skills required: Java, Spark, Kafka.
Netskope is looking for: Sr. Software Engineer, IoT Security
About The Role: Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope One SASE combines Netskope's market-leading Intelligent SSE with its next-generation Borderless SD-WAN to protect users, applications, and data everywhere with AI-powered zero trust security, while providing fast, reliable access and optimized connectivity to any application from any network location or device, including IoT, at scale. Click here to learn more about Netskope IoT Security.
What's In It For You: As a member of the IoT Security Team you will be working on some of the most challenging problems in the field of zero trust and IoT security. You will play a key role in the design, development, evolution and operation of a system that analyzes hundreds of parameters from discovered devices and leverages our rich contextual intelligence for device classification, risk assessment, granular access control and network segmentation.
What You Will Be Doing:
- Contributing to the design, development, scaling and operation of Netskope IoT Security.
- Identifying and incorporating emerging technologies and best practices into the team.
- Refining existing technologies to make the product more performant.
- Developing the OT security part of the solution.
- Owning all cloud components and driving architecture and design.
- Engaging in cross-functional team conversations to help prioritize tasks, communicate goals clearly to team members, and support overall project delivery.
Required Skills and Experience (an illustrative sketch follows this listing):
Scala and Java:
- Writing object-oriented and functional code
- Writing UDFs using Scala with Spark
- Collections framework
- Logging
- Sending metrics to Grafana
Spark and Kafka:
- Understanding of RDDs, DataFrames and Datasets
- Broadcast variables
- Spark Streaming with Kafka
- Understanding Spark cluster settings
- Executor and driver setup
- Understanding of Kafka topics and offsets
Good knowledge of Python programming, microservices architecture, and REST APIs is also desired.
Education: BSCS or equivalent required; MSCS or equivalent strongly preferred.
How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
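For illustration only (not Netskope's code): a minimal Spark Structured Streaming job in Scala that touches the skills listed above: a Kafka source, a broadcast reference set, and a Scala UDF. Broker addresses, the topic name, the message layout, and the risk rule are hypothetical.

```scala
// Hypothetical streaming sketch: consume device events from Kafka, flag risky categories via a UDF.
import org.apache.spark.sql.{SparkSession, functions => F}

object DeviceEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("device-event-stream")
      .getOrCreate()
    import spark.implicits._

    // Small reference set shipped to executors (the "broadcast variable" idea)
    val riskyCategories = spark.sparkContext.broadcast(Set("camera", "printer"))

    // A simple UDF that reads the broadcast value
    val riskFlag = F.udf((category: String) => riskyCategories.value.contains(category))

    val events = spark.readStream
      .format("kafka")                                           // requires the spark-sql-kafka package
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "iot-device-events")
      .load()
      .selectExpr("CAST(value AS STRING) AS raw")
      .withColumn("category", F.split($"raw", ",").getItem(1))   // assumes CSV-style messages
      .withColumn("is_risky", riskFlag($"category"))

    val query = events.writeStream
      .outputMode("append")
      .format("console")                                         // console sink for the sketch only
      .option("checkpointLocation", "/tmp/checkpoints/device-event-stream")
      .start()

    query.awaitTermination()
  }
}
```

A production job would write to a durable sink and manage offsets through the checkpoint location rather than printing to the console.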

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior Software Engineer
Experience: 10+ years
Top Skills: Java, Spring, Scala, AWS, Spark, SQL
Work Mode: Hybrid - 3 days from the office
Work Location: Marathahalli, Bangalore
Employer: Global product company - established 1969
Why Join Us? Be part of a global product company with over 50 years of innovation. Work in a collaborative and growth-oriented environment. Help shape the future of digital products in a rapidly evolving industry.
Required Job Skills and Abilities:
- 10+ years' experience in designing and developing enterprise-level software solutions
- 3 years' experience developing Scala/Java applications and microservices using Spring Boot
- 7 years' experience with large-volume data processing and big data tools such as Apache Spark, SQL, Scala, and Hadoop technologies
- 5 years' experience with SQL and relational databases
- 2 years' experience working with the Agile/Scrum methodology

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Senior Data Architect – Fintech Data Lakes
Location: Remote
Department: Enterprise Data & Analytics / Technology
Reports to: Chief Data Officer (CDO) or Head of Data Engineering
Role Highlights:
- Senior-level technical architect with strong cloud experience (GCP + Azure)
- Specialized in data lakes, compliance, and real-time & batch pipelines
- Financial services / fintech domain knowledge (e.g., ledgers, payment rails, PII compliance)
- Expertise in SQL plus Python/Scala/Java
- Mentoring, governance, and cross-functional advisory
Factors Affecting Range:
- Strong cloud certifications (GCP, Azure Architect)
- Deep domain knowledge of compliance frameworks (PCI, SOX, GLBA)
- Hands-on vs. purely strategic
About the Role: We are seeking a highly experienced Senior Data Architect to lead the architecture and governance of our fintech data platforms, spanning Google Cloud Platform (GCP) for real-time production systems and Azure for regulatory and business reporting. This role is critical to building secure, governed, and scalable data lakes that support both operational finance systems and strategic analytics. You will be responsible for designing robust data architectures that ingest, process, and govern both structured data (e.g., transactions, accounts, ledgers) and unstructured data (e.g., scanned documents, KYC images, PDFs, voice logs), ensuring compliance with financial regulations and enabling insights across the organization.
Key Responsibilities:
Data Lake & Architecture Strategy:
- Architect and maintain GCP-based production data lakes for real-time transactional ingestion and processing (e.g., payment processing, KYC, fraud detection).
- Design Azure-based reporting data lakes for BI, regulatory, and financial reporting workloads (e.g., ledger audits, compliance reports).
- Build multi-zone lake structures (raw, refined, curated) across both clouds, incorporating schema evolution, data contracts, and role-based access control (see the sketch after this listing).
Financial Data Modeling & Pipeline Design:
- Model financial datasets (ledger data, user profiles, transactions, pricing) using dimensional, normalized, and vault approaches.
- Build and optimize real-time and batch pipelines with GCP (BigQuery, Pub/Sub, Dataflow) and Azure (Data Factory, Synapse, ADLS Gen2).
- Enable unified analytics on structured data (MySQL) and unstructured content (OCR'd documents, audio transcripts, logs).
Compliance, Governance & Risk Controls:
- Implement data access, retention, and classification policies that meet regulatory requirements (GLBA, PCI-DSS, SOX, GDPR).
- Collaborate with infosec, legal, and audit teams to ensure auditability and lineage tracking across data flows.
- Define controls for PII, financial data sensitivity, and third-party data sharing.
Cross-Functional Enablement:
- Serve as a technical advisor to business and compliance teams for data design and provisioning.
- Mentor data engineers and analysts on financial data structures, accuracy, and business rules.
- Help define enterprise standards for metadata, data cataloging, and data quality monitoring using tools like Azure Purview and GCP Data Catalog.
Required Qualifications:
- 8+ years in data architecture, with significant experience in financial services, fintech, or banking environments.
- Strong experience with Google Cloud Platform (BigQuery, Dataflow, Cloud Storage, Pub/Sub) and Azure Data Lake / Synapse Analytics.
- Deep understanding of financial data modeling, including ledgers, double-entry accounting, payment rails, and regulatory audit structures.
- Experience with batch and streaming architectures, including handling of high-velocity financial transactions.
- Proficient in SQL and at least one programming language (Python, Scala, or Java).
- Strong understanding of data compliance frameworks, particularly in regulated financial environments.
Preferred Qualifications:
- Prior experience with data lakehouse design using Delta Lake, Iceberg, or BigLake.
- Experience integrating data platforms with BI/reporting tools like Power BI, Looker, Tableau, or internal compliance dashboards.
- Familiarity with fraud analytics, anti-money laundering (AML) data flows, or KYC enrichment pipelines.
- Python programming, web scraping, API integration, data analysis, machine learning, and Linux.
What Success Looks Like:
- A resilient, compliant, and scalable data foundation that enables accurate financial operations and reporting.
- Efficient, observable data pipelines with proactive data quality monitoring and failure alerting.
- High trust in the reporting data lake from internal audit, compliance, and executive stakeholders.
- A streamlined data access and provisioning process that supports agility while meeting governance requirements.
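For illustration only: the raw → refined → curated zone promotion mentioned above, sketched as a generic Spark batch job in Scala over cloud object storage. The bucket names, schema, and masking rule are hypothetical; the actual platform described in the listing would use the named GCP/Azure services rather than this standalone sketch.

```scala
// Hypothetical multi-zone lake promotion: raw JSON -> refined (typed, PII-masked) -> curated aggregate.
import org.apache.spark.sql.{SparkSession, functions => F}

object ZonePromotion {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("zone-promotion").getOrCreate()

    // Raw zone: transactions landed as-is from the source system
    val raw = spark.read.json("gs://example-lake/raw/transactions/")

    // Refined zone: typed, deduplicated, PII pseudonymised per policy
    val refined = raw
      .dropDuplicates("transaction_id")
      .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
      .withColumn("account_number", F.sha2(F.col("account_number"), 256)) // mask PII
    refined.write.mode("overwrite").parquet("gs://example-lake/refined/transactions/")

    // Curated zone: business-level daily aggregate for reporting
    val curated = refined
      .groupBy(F.col("merchant_id"), F.to_date(F.col("posted_at")).as("posting_date"))
      .agg(F.sum("amount").as("daily_volume"), F.count(F.lit(1)).as("txn_count"))
    curated.write
      .mode("overwrite")
      .partitionBy("posting_date")
      .parquet("gs://example-lake/curated/daily_merchant_volume/")

    spark.stop()
  }
}
```

The zone boundaries are where access control and data contracts are typically enforced: raw is restricted, refined carries masked PII, and curated is what reporting consumers see.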

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

Join our dynamic Workforce Planning (WFP) team within the Consumer and Community (CCB) Operations division and be part of a forward-thinking organization that leverages data science to optimize workforce efficiency. Contribute to innovative projects that drive impactful solutions for Chase's operations.
As an Operations Research Analyst within the WFP Data Science team, you will tackle complex and high-impact projects. Your responsibilities will include designing and developing optimization models and simulation models, supporting OR projects either individually or as part of a team, and collaborating with stakeholders to understand business requirements and define solution objectives clearly. It is crucial to identify and select the correct method to solve problems while staying up to date on the latest OR methodologies and ensuring the robustness of any mathematical solution. Additionally, you will be expected to develop and communicate recommendations and OR solutions in an easy-to-understand manner, leveraging data to tell a story. Your role will involve leading and persuading others positively to influence team efforts and helping frame business problems as technical problems with feasible solutions.
The ideal candidate should possess a Master's Degree with 4+ years or a Doctorate (PhD) with 2+ years of experience in Operations Research, Industrial Engineering, Systems Engineering, Financial Engineering, Management Science, or related disciplines. You should have experience supporting OR projects with multiple team members, hands-on experience developing simulation models, optimization models, and/or heuristics, a deep understanding of the mathematics and theory behind Operations Research techniques, and proficiency in Open Source Software (OSS) programming languages like Python, R, or Scala. Experience with commercial solvers like GUROBI, CPLEX, XPRESS, or MOSEK, as well as familiarity with basic data table operations (SQL, Hive, etc.), is required. Demonstrated relationship-building skills and the ability to make things happen through positive influence are essential. Preferred qualifications include advanced expertise with Operations Research techniques, prior experience building Reinforcement Learning models, extensive knowledge of Stochastic Modelling, and previous experience leading highly complex cross-functional technical projects with multiple stakeholders.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus in data and analytics is on leveraging data to drive insights and make informed business decisions. Advanced analytics techniques are utilized to help clients optimize operations and achieve strategic goals. As a Data Analyst at PwC, the emphasis is on utilizing advanced analytical techniques to extract insights from large datasets and facilitate data-driven decision-making. Skills in data manipulation, visualization, and statistical modeling are leveraged to assist clients in solving complex business problems.
Candidates with 4+ years of hands-on experience are preferred for the role of GenAI Data Scientist at PwC US - Acceleration Center. The Senior Associate level position requires a highly skilled individual with a strong background in data science, particularly focusing on GenAI technologies. The ideal candidate should possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.
Responsibilities include collaborating with product, engineering, and domain experts to identify high-impact GenAI opportunities, designing and building GenAI and Agentic AI solutions end-to-end, processing structured and unstructured data for LLM workflows, validating and evaluating models, containerizing and deploying production workloads, communicating findings and insights, and staying current with GenAI advancements.
Requirements for this position include a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field; 1-2 years of hands-on experience delivering GenAI solutions; proficiency in Python, with additional experience in R or Scala; experience with vector stores and search technologies; expertise in data preprocessing, feature engineering, and statistical experimentation; competence with cloud services across Azure, AWS, or Google Cloud; proficiency in data visualization; and strong problem-solving skills.
Nice-to-have skills include relevant certifications in GenAI tools and technologies, hands-on experience with leading agent orchestration platforms, proven experience in chatbot design and development, practical knowledge of ML/DL frameworks, and proficiency in object-oriented programming with languages such as Java, C++, or C#. Candidates with a background in BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA or any related degree are encouraged to apply for this challenging role at PwC US - Acceleration Center.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

Do you want to work on complex and pressing challenges, the kind that bring together curious, ambitious, and determined leaders who strive to become better every day? If this sounds like you, you've come to the right place.
You will be a core member of Periscope's technology team with responsibilities that range from developing and implementing our core enterprise products to ensuring that McKinsey's craft stays on the leading edge of technology. In this role, you will be involved in leading software development projects in a hands-on manner. You will spend about 70% of your time writing and reviewing code and creating software designs. Your expertise will expand into database design, core middle-tier modules, performance tuning, cloud technologies, DevOps, and continuous delivery domains over time. You will be an active learner, tinkering with new open-source libraries, using unfamiliar technologies without supervision, and learning frameworks and approaches. You will have a strong understanding of key agile engineering practices to guide teams on improvement opportunities in their engineering practices. You will provide ongoing coaching and mentoring to the developers to improve our organizational capability.
You will be based in our Bengaluru or Gurugram office as part of our Growth, Marketing & Sales team. You'll be aligned primarily with Periscope's technology team. Periscope by McKinsey enables better commercial decisions by uncovering actionable insights. The Periscope platform combines world-leading intellectual property, prescriptive analytics, and cloud-based tools to provide more than 25 solutions focused on insights and marketing, with expert support and training. It is a unique combination that drives revenue growth both now and in the future. Customer experience, performance, pricing, category, and sales optimization are powered by the Periscope platform. Periscope has a presence in 26 locations across 16 countries with a team of 1000+ business and IT professionals and a network of 300+ experts.
Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance/high-reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues at all levels will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won't find anywhere else. When you join us, you will have continuous learning opportunities, a voice that matters, be part of a global community, and receive world-class benefits.
Your qualifications and skills should include a degree in computer science or a related field, 6+ years' experience in software development, proficiency in Scala, React.js, relational and NoSQL databases, cloud infrastructure, container technologies, modern engineering practices, Agile methodology, and performance optimization tools, along with excellent analytical and problem-solving skills, a customer service focus, and the ability to work effectively under pressure and in diverse team settings. Prior experience leading a small team is advantageous.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

The ideal candidate for this position should possess strong expertise in programming/scripting languages and a proven ability to debug challenges across various operating systems. A certification in the relevant specialization is required, along with proficiency in using design and automation tools. In addition, the candidate should have excellent knowledge of CI and agile frameworks.
Moreover, the successful candidate must demonstrate strong communication, negotiation, networking, and influencing skills. Stakeholder management and conflict management skills are also essential for this role. The candidate should be proficient in setting up tools/infrastructure, defect metrics, and traceability metrics. A solid understanding of CI practices and agile frameworks is necessary. Furthermore, the candidate should be able to promote a strategic mindset to ensure the use of the right tools and coach and mentor the team to follow best practices. Expertise in Big Data and Hadoop ecosystems is required, along with the ability to build real-time stream-processing systems on large-scale data. Proficiency in data ingestion frameworks/data sources and data structures is also crucial for this role (an illustrative sketch follows this listing).
The profile required for this position includes 10+ years of expertise and hands-on experience in Spark with Scala and big data technologies. The candidate should have good working experience in Scala and object-oriented concepts, as well as in HDFS, Spark, Hive, and Oozie. Technical expertise with data models, data mining, and partitioning techniques is also necessary. Additionally, hands-on experience with SQL databases and a good understanding of CI/CD tools such as Maven, Git, Jenkins, and SONAR are required. Knowledge of Kafka and the ELK stack is a plus, and familiarity with data visualization tools like Power BI will be an added advantage. Strong communication and coordination skills with multiple stakeholders are essential, along with the ability to assess existing situations, propose improvements, and follow up on action plans.
In conclusion, the ideal candidate should have a professional attitude, be self-motivated, a fast learner, and a team player. The ability to work in international/intercultural environments and interact with onsite stakeholders is crucial for this role. If you are looking to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis, and develop or strengthen your expertise, you will find a perfect fit in this position.
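For illustration only (not part of the listing): a bare-bones Kafka consumer in Scala using the plain kafka-clients API, of the sort the data ingestion and "Kafka is a plus" points above hint at. The broker address, consumer group, and topic name are hypothetical.

```scala
// Hypothetical Kafka consumer sketch: subscribe to a topic and print records as they arrive.
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import scala.jdk.CollectionConverters._

object EventTailer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092")
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "event-tailer")
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringDeserializer")
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(java.util.List.of("events"))

    try {
      while (true) {
        // Poll for new records and print topic, partition, offset and value
        val records = consumer.poll(Duration.ofSeconds(1)).asScala
        records.foreach { r =>
          println(s"${r.topic()}-${r.partition()}@${r.offset()}: ${r.value()}")
        }
      }
    } finally consumer.close()
  }
}
```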

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Manager, Software Engineering - Data Engineering and Data Science

As a Manager of Software Engineering with a focus on Data Engineering and Data Science, you will be instrumental in shaping and expanding our Authentication Program. Your primary role will involve ensuring the integrity and excellence of our data and enabling our teams to operate efficiently. Leading a team of skilled engineers, you will drive innovation, oversee the successful execution of impactful projects, and contribute significantly to our organization's growth and success.

Your responsibilities will include:
- Leading the development and deployment of scalable data engineering and data science solutions to support the Authentication Program.
- Ensuring the accuracy, quality, and reliability of data across all systems and processes.
- Mentoring and providing technical guidance to a team of engineers and data scientists.
- Making strategic decisions and delivering innovative solutions in collaboration with cross-functional teams.
- Collaborating with product stakeholders to prioritize initiatives and align them with business objectives.
- Establishing and managing data pipelines to ensure efficient and accurate data processing.
- Implementing and advocating best practices in data engineering, data science, and software development.
- Automating and streamlining data processing workflows and development processes.
- Conducting Proof of Concepts (POCs) to assess and introduce new technologies.
- Participating in Agile ceremonies, contributing to team prioritization and planning.
- Developing and presenting roadmaps and proposals to Senior Management and stakeholders.
- Cultivating a culture of continuous improvement and excellence within the team.

Qualifications:

Technical Expertise:
- Proficiency in Data Engineering, Data Science, or related areas.
- Competence in programming languages like Python, Java, or Scala.
- Hands-on experience with data processing frameworks.
- Knowledge of data warehousing solutions.
- Proficiency in data modeling, ETL processes, and data pipeline orchestration.
- Familiarity with machine learning frameworks and libraries.
- Understanding of secure coding practices and data privacy regulations.

Leadership and Communication:
- Demonstrated leadership in managing technical teams.
- Strong problem-solving and decision-making abilities.
- Excellent written and verbal communication skills.
- Effective collaboration with cross-functional teams and stakeholders.
- Experience in Agile methodologies and project management.

Preferred Qualifications:
- Degree in Computer Science, Data Science, Engineering, or related fields.
- Experience with streaming data platforms.
- Familiarity with data visualization tools.
- Experience with CI/CD pipelines and DevOps practices.

If you require accommodations or assistance during the application process or recruitment in the US or Canada, please contact reasonable_accommodation@mastercard.com. Join us in a culture that values innovation, collaboration, and excellence. Apply now to contribute to shaping the future of our Authentication Program and maintaining the highest quality of our data.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

The ideal candidate for the Big Data Engineer role should have 3-6 years of experience and be located in Hyderabad, with strong skills in Spark, Python/Scala, AWS/Azure, Snowflake, Databricks, and SQL Server/NoSQL.

As a Big Data Engineer, your main responsibilities will include designing and implementing data pipelines for both batch and real-time data processing. You will need to optimize data storage solutions for efficiency and scalability, collaborate with analysts and business teams to meet data requirements, monitor data pipeline performance, and troubleshoot any issues that arise. It is crucial to ensure compliance with data security and privacy policies.

The required skills for this role include proficiency in Python, SQL, and ETL frameworks; experience with big data tools such as Spark and Hadoop; strong knowledge of cloud services and databases; and familiarity with data modeling and warehousing concepts.
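As an illustration of the batch side of the pipelines this role mentions, here is a minimal Spark/Scala sketch that ingests raw JSON from cloud storage, keeps only the latest record per business key, and writes partitioned Parquet. The bucket paths, column names, and key are assumptions for the example, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object IncrementalCustomerLoad {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("incremental-customer-load").getOrCreate()

    // Hypothetical landing zone on cloud storage; the path would be configured per environment.
    val raw = spark.read.json("s3a://landing-zone/customers/2025-07-28/")

    // Keep only the latest record per customer_id so re-delivered files do not create duplicates.
    val latestPerKey = Window.partitionBy("customer_id").orderBy(col("updated_at").desc)
    val deduped = raw
      .withColumn("rn", row_number().over(latestPerKey))
      .filter(col("rn") === 1)
      .drop("rn")
      .withColumn("load_date", current_date())

    // Partitioned Parquet in the curated zone keeps downstream reads efficient and scalable.
    deduped.write
      .mode("append")
      .partitionBy("load_date")
      .parquet("s3a://curated-zone/customers/")

    spark.stop()
  }
}
```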

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Content Strategist at our company based in Bengaluru, Karnataka, India, you will play a crucial role in developing user-friendly, SEO-optimized content for a diverse audience. Your responsibilities will include creating engaging visual presentations and videos, collaborating with subject matter experts, and updating existing content. You will also identify impactful video topics for our YouTube channel to drive revenue.

To excel in this role, you must possess strong written and verbal communication skills in English and have a knack for simplifying complex concepts. Your expertise in content strategy and marketing will be essential in driving our marketing efforts and engaging with our YouTube community. Proficiency in YouTube analytics, Content Management Systems, and SEO tools is preferred.

In addition to your content creation duties, you will be responsible for analyzing performance metrics, managing multiple projects, and ensuring quality and design consistency. A passion for technology, including Cloud, DevOps, Data Science, and programming languages such as Java, Python, or Scala, or Web Development, will be advantageous.

This position offers the opportunity to work from our office with a standard 5-day work week. If you are a creative thinker with a commitment to innovation and quality, we invite you to join our team and contribute to the growth of our content marketing channels.

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 22 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Primary skills: PySpark/Hadoop/Scala. Notice period: immediate to 60 days.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Primary skills: Spark, SQL, Spark with Java/Spark with Scala. Notice period: immediate to 60 days.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Working with Us

Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more at careers.bms.com/working-with-us.

Summary

As a Data Engineer based at our BMS Hyderabad site, you will be part of the Data Platform team and support the larger Data Engineering community that delivers data and analytics capabilities. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data.

Key Responsibilities

The Data Engineer will be responsible for designing, building, and maintaining the data products, evolving those products, and choosing the data architecture best suited to our organization's data needs.
Serves as the Subject Matter Expert on Data & Analytics Solutions.
Accountable for delivering high-quality data products and analytics-ready data solutions.
Develop and maintain ETL/ELT pipelines for ingesting data from various sources into our data warehouse.
Develop and maintain data models to support our reporting and analysis needs.
Optimize data storage and retrieval to ensure efficient performance and scalability.
Collaborate with data architects, data analysts and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements.
Ensure data quality and integrity through data validation and testing (a minimal sketch of such a validation step follows this posting).
Implement and maintain security protocols to protect sensitive data.
Stay up to date with emerging trends and technologies in data engineering and analytics.
Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams and the Data Community lead to shape and adopt the data and technology strategy.
Accountable for evaluating data enhancements and initiatives, assessing capacity and prioritization along with onshore and vendor teams.
Knowledgeable in evolving trends in data platforms and product-based implementation.
Manage and provide guidance for the data engineers supporting projects, enhancements, and break/fix efforts.
Has an end-to-end ownership mindset in driving initiatives through completion.
Comfortable working in a fast-paced environment with minimal oversight.
Mentors and provides career guidance to other team members effectively to unlock full potential.
Prior experience working in an Agile/product-based environment.
Provides strategic feedback to vendors on service delivery and balances workload with vendor teams.

Qualifications & Experience

Hands-on experience working on implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment.
Breadth of experience in technology capabilities that span the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML.
Ability to craft and architect data solutions and automation pipelines to productionize solutions.
Hands-on experience developing and delivering data and ETL solutions with technologies like AWS data services (Glue, Redshift, Athena, Lake Formation, etc.); Cloudera Data Platform and Tableau experience is a plus.
Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Strong programming skills in languages and libraries such as Python, PySpark, R, PyTorch, Pandas, and Scala.
Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc.
Experience with cloud-based data technologies such as AWS, Azure, or GCP (preferably strong in AWS).
Strong analytical and problem-solving skills.
Excellent communication and collaboration skills.
Functional knowledge or prior experience in the Life Sciences Research and Development domain is a plus.
Experience and expertise in establishing agile and product-oriented teams that work effectively with teams in the US and other global BMS sites.
Initiates challenging opportunities that build strong capabilities for self and team.
Demonstrates a focus on improving processes, structures, and knowledge within the team.
Leads in analyzing current states, delivers strong recommendations to address complexity in the environment, and executes to bring complex solutions to completion.
AWS Data Engineering/Analytics certification is a plus.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers

With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol

BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role.

Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.
BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.

BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area.

If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
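The data validation responsibility referenced above can be illustrated with a small Spark/Scala sketch that runs basic quality checks on a staged batch before promoting it to the warehouse. The dataset, columns, and paths are hypothetical assumptions for the example; a production pipeline would add alerting and a richer rule set.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object StudyDataQualityCheck {
  // Hypothetical rule set: the pipeline only promotes a batch that passes these checks.
  def validate(df: DataFrame): Seq[String] = {
    val errors = scala.collection.mutable.ListBuffer[String]()

    if (df.isEmpty) errors += "Batch is empty"

    val nullKeys = df.filter(col("subject_id").isNull).count()
    if (nullKeys > 0) errors += s"$nullKeys rows are missing subject_id"

    val dupKeys = df.groupBy("subject_id", "visit_date").count().filter(col("count") > 1).count()
    if (dupKeys > 0) errors += s"$dupKeys duplicate subject_id/visit_date combinations"

    errors.toList
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("study-data-quality-check").getOrCreate()

    // Hypothetical staging path holding the newly ingested batch.
    val batch = spark.read.parquet("s3a://staging/clinical_visits/")

    val issues = validate(batch)
    if (issues.nonEmpty) {
      // Failing fast keeps bad data out of the warehouse; a real pipeline would also raise an alert.
      sys.error(s"Data quality checks failed: ${issues.mkString("; ")}")
    }

    batch.write.mode("append").parquet("s3a://warehouse/clinical_visits/")
    spark.stop()
  }
}
```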

Posted 1 week ago

Apply

3.0 years

0 Lacs

Delhi, Delhi

On-site

Job Description: Hadoop & ETL Developer

Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/MSc (IT or CS)/MS
Salary: Up to 80k (the final offer depends on the interview and experience)
Notice Period: Immediate joiners to those who can join within 20 days
Candidates from Delhi/NCR will be preferred.

Job Summary:
We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities
Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte (a streaming ingestion sketch follows this posting).
Develop and manage workflow orchestration using Apache Airflow.
Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
Ensure data quality, governance, and consistency across the pipeline.
Collaborate with data engineering teams to build scalable and high-performance data solutions.
Monitor, debug, and enhance big data workflows to improve reliability and efficiency.

Required Skills & Experience
3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
Strong expertise in ETL processes, data transformation, and data warehousing.
Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
Proficiency in SQL and handling structured and unstructured data.
Experience with NoSQL databases like MongoDB.
Strong programming skills in Python or Scala for scripting and automation.
Experience in optimizing Spark and MapReduce jobs for high-performance computing.
Good understanding of data lake architectures and big data best practices.

Preferred Qualifications
Experience in real-time data streaming and processing.
Familiarity with Docker/Kubernetes for deployment and orchestration.
Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.

If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!

Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
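To make the real-time ingestion responsibility concrete, here is a minimal Spark Structured Streaming sketch in Scala that consumes JSON events from a Kafka topic and lands them on HDFS with checkpointing. The topic name, brokers, schema, and paths are hypothetical; the posting also lists NiFi and Airbyte as alternative ingestion tools, and this sketch assumes the spark-sql-kafka-0-10 connector is on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.types._

object ClickstreamKafkaIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-kafka-ingest").getOrCreate()

    // Hypothetical event schema matching the JSON produced upstream.
    val schema = StructType(Seq(
      StructField("user_id", StringType),
      StructField("page", StringType),
      StructField("event_time", TimestampType)
    ))

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "clickstream")
      .option("startingOffsets", "latest")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Checkpointing lets the job resume after restarts and gives exactly-once file output.
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/raw/clickstream/")
      .option("checkpointLocation", "hdfs:///checkpoints/clickstream/")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start()

    query.awaitTermination()
  }
}
```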

Posted 1 week ago

Apply

2.0 - 4.0 years

25 - 30 Lacs

Pune

Work from Office

Rapid7 is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include:
Liaising with coworkers and clients to elucidate the requirements for each task.
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists.
Detecting and correcting errors in your work.
Ensuring that your work remains backed up and readily accessible to relevant coworkers.
Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
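A small, hypothetical Spark/Scala sketch of the "preparing raw data for manipulation by data scientists" duty: raw CSV is cleaned, typed, and exposed as a strongly typed Dataset. The security-scan domain, column names, and paths are illustrative assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ScanResultPrep {
  // Hypothetical record shape handed to downstream data scientists.
  final case class ScanResult(assetId: String, severity: String, cvssScore: Double, scannedAt: java.sql.Timestamp)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("scan-result-prep").getOrCreate()
    import spark.implicits._

    // Raw CSV drops arrive with inconsistent casing and occasional malformed rows.
    val raw = spark.read
      .option("header", "true")
      .option("mode", "DROPMALFORMED")
      .csv("s3a://raw-zone/scans/")

    val prepared = raw
      .select(
        trim(col("asset_id")).as("assetId"),
        upper(trim(col("severity"))).as("severity"),
        col("cvss_score").cast("double").as("cvssScore"),
        to_timestamp(col("scanned_at")).as("scannedAt")
      )
      .na.drop(Seq("assetId", "cvssScore"))
      .as[ScanResult] // a typed Dataset surfaces schema drift early instead of downstream

    prepared.write.mode("overwrite").parquet("s3a://analytics-zone/scans_prepared/")
    spark.stop()
  }
}
```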

Posted 1 week ago

Apply

6.0 - 11.0 years

25 - 30 Lacs

Pune

Work from Office

Senior Data Engineer

Overview
The Enterprise Data Solutions team is looking for a Big Data Engineer to drive our mission to unlock the potential of data assets by consistently innovating, eliminating friction in how users access data from its Big Data repositories, and enforcing standards and principles in the Big Data space. The candidate will be part of an exciting, fast-paced environment developing Data Engineering solutions in the data and analytics domain.

Role
Develop high-quality, secure and scalable data pipelines using Spark with Scala/Python on Hadoop or object storage.
Leverage new technologies and approaches to innovate with increasingly large data sets.
Drive automation and efficiency in data ingestion, data movement and data access workflows through innovation and collaboration.
Understand, implement and enforce software development standards and engineering principles in the Big Data space.
Contribute ideas to help ensure that required standards and processes are in place, and actively look for opportunities to enhance standards and improve process efficiency.
Perform assigned tasks and handle production incidents independently.

All About You
6+ years of experience in Data Warehouse related projects in a product or service-based organization
Expertise in Data Engineering and implementing multiple end-to-end DW projects in a Big Data environment
Experience building data pipelines through Spark with Scala/Python/Java on Hadoop or object storage
Experience building NiFi pipelines
Experience working with databases like Oracle and Netezza, with strong SQL knowledge
Strong analytical skills required for debugging production issues, providing root cause and implementing mitigation plans
Strong communication skills - both verbal and written
Ability to multi-task across multiple projects and interface with external/internal resources
Ability to be high-energy, detail-oriented, proactive and able to function under pressure in an independent environment, along with a high degree of initiative and self-motivation to drive results
Ability to quickly learn and implement new technologies, and perform POCs to explore the best solution for a problem statement
Flexibility to work as a member of matrix-based, diverse and geographically distributed project teams
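As a rough illustration of moving data from the relational sources this role lists (Oracle, Netezza) onto object storage with Spark and Scala, the sketch below performs a partitioned JDBC read and writes Parquet. Connection details, table and column names, and bucket paths are hypothetical; the JDBC driver for the source database would need to be on the classpath, and credentials would normally come from a secrets store rather than environment variables.

```scala
import org.apache.spark.sql.SparkSession

object OrdersOffloadToObjectStorage {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-offload").getOrCreate()

    // Hypothetical Oracle source table read in parallel.
    val orders = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1")
      .option("dbtable", "SALES.ORDERS")
      .option("user", sys.env.getOrElse("DB_USER", ""))
      .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
      // Spark splits the table into numPartitions range scans on ORDER_ID for parallel extraction.
      .option("partitionColumn", "ORDER_ID")
      .option("lowerBound", "1")
      .option("upperBound", "100000000")
      .option("numPartitions", "16")
      .load()

    // Land the offloaded data as partitioned Parquet on object storage.
    orders.write
      .mode("overwrite")
      .partitionBy("ORDER_DATE")
      .parquet("s3a://warehouse-offload/orders/")

    spark.stop()
  }
}
```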

Posted 1 week ago

Apply

5.0 - 7.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Senior developer with 5-7 years' experience and expertise in big data technologies.

Experience: 5-7 Years
Must Have Skills: Java, Spring Boot, Microservices, Hibernate, SQL, any cloud platform (Google Cloud preferred)
Good to Have Skills: Spark, Scala, Google BigQuery, Harness, Docker, Kubernetes

Impact You'll Make:
This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week.

TransUnion Job Title: Sr Developer, Applications Development

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
