7.0 - 11.0 years
0 Lacs
Hyderabad, Telangana
On-site
About the Role: As a highly motivated and experienced Sr. Principal Support Engineer at Heroku, you will play a crucial role in designing and architecting a robust and scalable Service Cloud solution to support the internal support team. Your responsibilities will involve implementing best practices for support operations, ensuring efficient resolution of issues, and providing exceptional customer support.

Your key responsibilities will include designing, developing, and implementing Service Cloud solutions, serving as a subject matter expert on Service Cloud best practices, assisting customers in troubleshooting custom code and integrations, and ensuring customer satisfaction by resolving cases effectively. You will lead the implementation and optimization of Service Cloud features such as case management, knowledge base, and customer portals. Additionally, you will develop detailed documentation and knowledge base articles to empower customers and the support team.

In this role, you will apply rigorous troubleshooting techniques and tools to identify and resolve issues, propose test cases, and suggest code changes to fix problems. You will mentor internal support teams on technical issues, build relationships with cross-functional teams, and analyze customer support data to drive continuous improvement of the platform.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science or a related Engineering field, or have equivalent work experience with demonstrated proficiency. You must have at least 7-8 years of experience in a software development environment, with proven expertise in designing and implementing complex Service Cloud solutions. You should also possess exceptional debugging, troubleshooting, and problem-solving skills, along with excellent written and verbal communication abilities.

Your technical background should include extensive experience in building and configuring robust Service Cloud solutions for customer support teams, a deep understanding of Service Cloud features and functionality, and strong experience with Salesforce APIs and integration technologies (an illustrative API sketch follows this posting). Knowledge of Salesforce architecture, data model, and security is essential, along with proficiency in the MVC framework, Service Cloud Lightning, and JavaScript. Prior experience with Ruby on Rails and Marketing Cloud functionality is desirable.

This position also requires you to participate in an after-hours on-call rotation to support critical incidents and to work in shifts aligned to IST or EMEA timings. Your willingness to learn new technologies, collaborate with multi-functional teams, and tackle complex problems will be key to success in this role.
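Since the role centers on Service Cloud case management and Salesforce APIs, here is one possible integration sketch using the community simple-salesforce Python client to create and query cases via the REST API. Credentials, subjects, and field values are placeholders, not details from the posting:

```python
from simple_salesforce import Salesforce  # pip install simple-salesforce

# Placeholder credentials; a real integration would use OAuth or a secrets store.
sf = Salesforce(
    username="support-bot@example.com",
    password="********",
    security_token="********",
)

# Create a support case via the REST API.
result = sf.Case.create({
    "Subject": "Dyno restart loop after deploy",
    "Origin": "Web",
    "Priority": "High",
})
print("Created case:", result["id"])

# Query open high-priority cases with SOQL.
rows = sf.query("SELECT Id, Subject FROM Case WHERE Status = 'New' AND Priority = 'High'")
for rec in rows["records"]:
    print(rec["Id"], rec["Subject"])
```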
Posted 22 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Senior Developer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. You'll be working on complex technical problems that require detailed analysis, in conjunction with fellow engineers, business analysts and business stakeholders.

To be successful as a Senior Developer you should have experience with:
Designing and developing distributed data processing pipelines using Scala and Spark.
Optimizing Spark jobs for performance and scalability.
Integrating with data sources like Kafka, HDFS, S3, Hive, and relational databases.
Collaborating with data engineers, analysts, and business stakeholders to understand data requirements.
Implementing data transformation, cleansing, and enrichment logic.
Ensuring data quality, lineage, and governance.
Participating in code reviews, unit testing, and deployment processes.

Some other highly valued skills include:
Strong proficiency in Scala, especially functional programming paradigms.
Hands-on experience with Apache Spark (RDDs, DataFrames, Datasets).
Expertise with Spark batch processing.
Knowledge of Big Data ecosystems: Hadoop, Hive, Impala, Kafka.
Experience with data serialization formats like Parquet and Avro.
Understanding of performance tuning in Spark (e.g., partitioning, caching, shuffling); a brief sketch follows this posting.
Proficiency in SQL and data modeling.
Familiarity with CI/CD tools, version control (Git), and containerization (Docker/Kubernetes).
Familiarity with the AWS toolset is an added advantage.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities: Building and maintaining data architecture pipelines that enable the transfer and processing of durable, complete and consistent data. Designing and implementing data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Developing processing and analysis algorithms fit for the intended data complexity and volumes. Collaborating with data scientists to build and deploy machine learning models.

Analyst Expectations: To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. Requires in-depth technical knowledge and experience in the assigned area of expertise, and a thorough understanding of the underlying principles and concepts within that area. They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard.
The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in the work area, acting as an advisor where appropriate. They will have an impact on the work of related teams within the area, partner with other functions and business areas, and take responsibility for the end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within your own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
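For illustration, here is a minimal PySpark sketch of the kind of batch pipeline and Spark tuning this posting describes (the role itself emphasizes Scala; the same API shape applies). All paths, table and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("trade-enrichment")                      # hypothetical job name
         .config("spark.sql.shuffle.partitions", "200")    # tune shuffle width to data volume
         .getOrCreate())

# Large fact table and small reference table (hypothetical S3 paths).
trades = spark.read.parquet("s3a://bucket/trades/")
products = spark.read.parquet("s3a://bucket/products/")

# Broadcast the small side so the join avoids a full shuffle.
enriched = trades.join(F.broadcast(products), "product_id")

# Cleansing/enrichment logic; cached because two aggregations reuse it below.
cleaned = (enriched
           .dropDuplicates(["trade_id"])
           .withColumn("notional_usd", F.col("qty") * F.col("px"))
           .cache())

daily = cleaned.groupBy("trade_date").agg(F.sum("notional_usd").alias("notional"))
daily.write.mode("overwrite").partitionBy("trade_date").parquet("s3a://bucket/agg/daily/")

by_desk = cleaned.groupBy("desk").agg(F.count("*").alias("n_trades"))
by_desk.write.mode("overwrite").parquet("s3a://bucket/agg/by_desk/")
```

The broadcast join, the explicit shuffle-partition setting, and caching a reused intermediate are exactly the partitioning/caching/shuffling levers the posting names.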
Posted 23 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Senior Software Engineer at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. You'll be working on complex technical problems that require detailed analysis, in conjunction with fellow engineers, business analysts and business stakeholders.

To be successful as a Senior Software Engineer you should have experience with:
Strong proficiency in Scala, especially functional programming paradigms.
Hands-on experience with Apache Spark (RDDs, DataFrames, Datasets).
Expertise with Spark batch processing.
Knowledge of Big Data ecosystems: Hadoop, Hive, Impala, Kafka.
Experience with data serialization formats like Parquet and Avro.
Understanding of performance tuning in Spark (e.g., partitioning, caching, shuffling).
Proficiency in SQL and data modeling.
Familiarity with CI/CD tools, version control (Git), and containerization (Docker/Kubernetes).
Familiarity with the AWS toolset is an added advantage.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities: Building and maintaining data architecture pipelines that enable the transfer and processing of durable, complete and consistent data. Designing and implementing data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Developing processing and analysis algorithms fit for the intended data complexity and volumes. Collaborating with data scientists to build and deploy machine learning models.

Analyst Expectations: To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. Requires in-depth technical knowledge and experience in the assigned area of expertise, and a thorough understanding of the underlying principles and concepts within that area. They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in the work area, acting as an advisor where appropriate. They will have an impact on the work of related teams within the area, partner with other functions and business areas, and take responsibility for the end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately.
Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within your own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 23 hours ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
As a senior data analyst at Comscore India, your primary responsibility will be to extract, transform, and analyze data to derive insights and provide answers to questions posed by internal and external clients. You will also engage in analytical hypothesis testing and modeling to offer key insights to stakeholders. Your role will involve supporting the sales function by leveraging your data expertise to conduct feasibility reviews and provide detailed analysis of inquiries raised by various stakeholders. Additionally, you will play a crucial part in creating and innovating Comscore's offerings in the marketplace and will lead cross-functional teams of analysts.

In this senior position, you will have the opportunity to work on a variety of projects, present your findings to clients and upper management, and collaborate with cross-functional teams to implement quality assurance methods. You will also participate in sales calls, manage custom research projects, and identify process efficiencies and automation opportunities.

To excel in this role, you should have 4-8 years of experience in data mining, SQL, Python, PySpark, and Scala. Familiarity with Comscore's offerings and research methods, or equivalent experience in market research, is desirable. Project management experience and the ability to partner with, influence, and impact others will also be beneficial.

The regular working hours for this position will typically cover a combination of US and India business hours, from 2pm to 11pm IST. Occasional late hours may be required for meetings with global teams. During the onboarding and training period, you may need to work US Eastern hours.

Comscore offers a comprehensive benefits package, including medical insurance coverage for employees and their dependents, provident fund, annual leave days, public holidays, sick leave days, paternity leave days, and more. Flexible work arrangements and additional perks such as the Sodexo meal scheme are also provided.

If you are motivated by challenges and interested in shaping the future of media measurement, Comscore offers a dynamic environment where you can make a significant impact. Join us in our mission to simplify the complex and provide valuable insights to our clients and partners. For more information, visit Comscore.com.
Posted 23 hours ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Full Stack Data Engineer Lead Analyst at Evernorth, you will be a key player in the Data & Analytics Engineering organization of Cigna, a leading health services company. Your role will involve delivering business needs by understanding requirements and deploying software into production. To excel in this position, you should be well-versed in critical technologies, eager to learn, and committed to adding value to the business. Ownership, a thirst for knowledge, and an open mindset are essential attributes for a successful Full Stack Engineer.

In addition to delivery responsibilities, you will be expected to embrace an automation-first and continuous-improvement mindset. You will drive the adoption of CI/CD tools and support the enhancement of toolsets and processes. Your ability to articulate clear business objectives aligned with technical specifications and to work in an iterative, agile manner will be crucial. Taking ownership and being accountable, writing referenceable and modular code, and ensuring data quality are key behaviors expected of you.

Key Characteristics:
- Independently design and architect solutions
- Demonstrate ownership and accountability
- Write referenceable and modular code
- Possess fluency in specific areas and proficiency in multiple areas
- Exhibit a passion for continuous learning
- Maintain a quality mindset to ensure data quality and business impact assessment

Required Skills:
- Experience in developing data integration and ingestion strategies, including the Snowflake cloud data warehouse, AWS S3 buckets, and loading nested JSON-formatted data (a brief sketch follows this posting)
- Strong understanding of Snowflake cloud database architecture
- Proficiency in big data technologies like Databricks, Hadoop, HiveQL, Spark (Scala/Python) and cloud technologies such as AWS (S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR)
- Experience in working on analytical models and enabling their deployment to production via data and analytical pipelines
- Expertise in query tuning and performance improvements
- Previous exposure to an onsite/offshore setup or model

Required Experience & Education:
- 8+ years of professional industry experience
- Bachelor's degree (or equivalent)
- 5+ years of Python scripting experience
- 5+ years of data management and SQL expertise in Teradata & Snowflake
- 3+ years of Agile team experience, preferably with Scrum

Desired Experience:
- Familiarity with version management tools, with Git being preferred
- Exposure to BDD and TDD development methodologies
- Experience in an agile CI/CD environment; Jenkins experience is preferred
- Knowledge of health care information domains is advantageous

Location & Hours of Work:
- (Specify whether the position is remote, hybrid, in-office, and where the role is located as well as the required hours of work)

Evernorth is committed to being an Equal Opportunity Employer, actively promoting and supporting diversity, equity, and inclusion efforts throughout the organization. Staff are encouraged to participate in these initiatives to enhance internal practices and external collaborations with diverse client populations.
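As one illustration of the nested-JSON ingestion skill listed above, here is a minimal PySpark sketch that flattens nested JSON landed in S3 into a columnar layout a Snowflake stage can then load. The schema, bucket and column names are invented for the example:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-json-flatten").getOrCreate()

# Hypothetical nested records, e.g.
# {"claim_id": "...", "member": {"id": "...", "plan": "..."},
#  "lines": [{"code": "...", "amount": 12.5}, ...]}
raw = spark.read.json("s3a://bucket/claims/2024/")

# Flatten the nested struct and explode the repeated array into one row per line item.
flat = (raw
        .withColumn("line", F.explode("lines"))
        .select(
            "claim_id",
            F.col("member.id").alias("member_id"),
            F.col("member.plan").alias("plan"),
            F.col("line.code").alias("line_code"),
            F.col("line.amount").alias("line_amount"),
        ))

# Write a flat Parquet copy that a Snowflake COPY INTO from an external stage can consume.
flat.write.mode("overwrite").parquet("s3a://bucket/claims_flat/2024/")
```

Snowflake can also ingest the raw JSON directly into a VARIANT column and flatten it in SQL; which side does the flattening is a design choice.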
Posted 23 hours ago
2.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You will be responsible for designing, developing, and maintaining scalable data pipelines using Azure Databricks. Your role will involve building and optimizing ETL/ELT processes for structured and unstructured data, collaborating with data scientists, analysts, and business stakeholders, integrating Databricks with Azure Data Lake, Synapse, Data Factory, and Blob Storage, developing real-time data streaming pipelines (a brief illustrative sketch follows this posting), and managing data models and data warehouses. Additionally, you will optimize performance, manage resources, ensure cost efficiency, implement best practices for data governance, security, and quality, troubleshoot and improve existing data workflows, contribute to architecture and technology strategy, mentor junior team members, and maintain documentation.

To excel in this role, you should have a Bachelor's/Master's degree in Computer Science, IT, or a related field, along with 5+ years of data engineering experience (minimum 2+ years with Databricks). Also essential are strong expertise in Azure cloud services (Data Lake, Synapse, Data Factory), proficiency in Spark (PySpark/Scala) and big data processing, experience with Delta Lake, Structured Streaming, and real-time pipelines, strong SQL skills, an understanding of data modeling and warehousing, familiarity with DevOps tools such as CI/CD, Git, Terraform, and Azure DevOps, and excellent problem-solving and communication skills.

Preferred qualifications include Databricks certification (Associate/Professional), experience with machine learning workflows on Databricks, knowledge of data governance tools like Purview, experience with REST APIs, Kafka, and Event Hubs, and cloud performance tuning and cost optimization experience.

Join us to be a part of a supportive and collaborative team, work with a growing company in the exciting BI and data industry, enjoy a competitive salary and performance-based bonuses, and have opportunities for professional growth and development. If you are interested in this opportunity, please send your resume to hr@exillar.com and fill out the form at https://forms.office.com/r/HdzMNTaagw.
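As a sketch of the real-time streaming pipelines referenced above: a minimal Structured Streaming job that reads a Kafka topic and appends to a Delta table, the standard Databricks pattern. Broker, topic, schema and paths are placeholders:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("device-events-stream").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("ts", TimestampType()),
])

# Read the Kafka topic as an unbounded stream and parse the JSON payload.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
          .option("subscribe", "device-events")               # placeholder topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Append into Delta; the checkpoint lets the job restart without duplicating output.
query = (events.writeStream.format("delta")
         .outputMode("append")
         .option("checkpointLocation", "/mnt/checkpoints/device-events")  # placeholder path
         .start("/mnt/delta/device_events"))                              # placeholder path

query.awaitTermination()
```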
Posted 1 day ago
10.0 - 15.0 years
0 Lacs
Delhi
On-site
As a seasoned data engineering professional with 10+ years of experience, you will lead and mentor a team of data engineers to ensure high performance and career growth. Your primary responsibility will be to architect and optimize scalable data infrastructure, guaranteeing high availability and reliability. Additionally, you will drive the development and implementation of data governance frameworks and best practices, collaborating closely with cross-functional teams to define and execute a data roadmap.

Your expertise in backend development using languages such as Java, PHP, Python, Node.js, Golang, and JavaScript, along with HTML and CSS, will be crucial. Proficiency in SQL, Python, and Scala for data processing and analytics is a must. In-depth knowledge of cloud platforms such as AWS, GCP, or Azure is required, along with hands-on experience in big data technologies like Spark, Hadoop, Kafka, and distributed computing frameworks (a brief Kafka consumer sketch follows this posting). You will be responsible for ensuring data security, compliance, and quality across all data platforms while optimizing data processing workflows for performance and cost efficiency.

A strong foundation in High-Level Design (HLD) and Low-Level Design (LLD), as well as design patterns, preferably using Spring Boot or Google Guice, is necessary. Experience with data warehousing solutions like Snowflake, Redshift, or BigQuery will be beneficial. Your role will also involve working with NoSQL databases such as Redis, Cassandra, MongoDB, and TiDB, as well as automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.

A proven ability to drive technical strategy aligned with business objectives, plus strong leadership, communication, and stakeholder management skills, is essential for this position. Candidates from Tier 1 colleges/universities with a background in product startups and experience implementing data engineering systems from an early stage in the company are preferred. Additionally, experience in machine learning infrastructure or MLOps, exposure to real-time data processing and analytics, and an interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture will be advantageous. Prior experience in a SaaS or high-growth tech company is a plus.

If you are a highly skilled data engineer with a passion for innovation and technical excellence, we invite you to apply for this challenging and rewarding opportunity.
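As one small illustration of the real-time ingestion side of such a platform, a consumer sketch using the kafka-python client; the topic, broker and field names are hypothetical:

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "clickstream-events",                       # hypothetical topic
    bootstrap_servers=["broker1:9092"],         # hypothetical broker
    group_id="analytics-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real pipeline would validate, enrich, and hand off to a sink
    # (object store, warehouse loader, stream processor) rather than print.
    print(event.get("user_id"), event.get("page"))
```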
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a highly skilled and experienced Senior Data Scientist with a strong background in Artificial Intelligence (AI) and Machine Learning (ML). You will be joining our team as an innovative, analytical, and collaborative team player with a proven track record in end-to-end AI/ML project delivery, including expertise in data processing, modeling, and model deployment. You should have a minimum of 5-7 years of experience in data science, with a focus on AI/ML applications.

Your technical skills should include proficiency in a wide range of ML algorithms such as regression, classification, clustering, decision trees, neural networks, and deep learning architectures (e.g., CNNs, RNNs, GANs). Strong programming skills in Python, R, or Scala are required, along with experience in ML libraries like TensorFlow, PyTorch, and scikit-learn (a brief scikit-learn sketch follows this posting). You should have experience in data wrangling, cleaning, and feature engineering, with familiarity with SQL and data processing frameworks like Apache Spark. Model deployment using tools like Docker, Kubernetes, and cloud services (AWS, GCP, or Azure) should be part of your skill set. A strong foundational knowledge of statistics, probability, and the mathematical concepts used in AI/ML is essential, along with proficiency in data visualization tools such as Tableau, Power BI, or matplotlib.

Preferred qualifications include familiarity with big data tools like Hadoop, Hive, and distributed computing. Hands-on experience with NLP techniques like text mining, sentiment analysis, and transformers is a plus. Expertise in analyzing and forecasting time-series data, as well as familiarity with CI/CD pipelines for ML, model versioning, and performance monitoring, is also preferred. Leadership experience, such as leading cross-functional project teams or managing data science projects in a production setting, is valued.

Your personal attributes should include the problem-solving skills to break down complex problems and design innovative, data-driven solutions. Strong written and verbal communication skills are necessary to convey technical insights clearly to diverse audiences. A keen interest in staying updated with the latest advancements in AI and ML, along with the ability to quickly learn and implement new technologies, is expected.
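To make the ML-library requirement concrete, a minimal scikit-learn sketch of the train/evaluate loop such a role exercises daily, using a bundled dataset so it runs as-is:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Chaining the scaler and model ensures identical transforms at fit and predict time.
clf = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

print("CV accuracy:", cross_val_score(clf, X_train, y_train, cv=5).mean())
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```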
Posted 1 day ago
5.0 years
0 Lacs
Haryana, India
On-site
Job Description

About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, which include the Philippines, India, and the United States. It started with one ridiculously good idea to create a different breed of Business Processing Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment's notice, and mastering consistency in an ever-changing world.

What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.

Job Description Summary: We are looking for a Data Scientist with deep expertise in modern AI/ML technologies to join our innovative team. This role combines cutting-edge research in machine learning, deep learning, and generative AI with practical full-stack cloud development skills. You will be responsible for architecting and implementing end-to-end AI solutions, from data engineering pipelines to production-ready applications leveraging the latest in agentic AI and large language models.

Key Responsibilities

AI/ML Development & Research: Design, develop, and deploy advanced machine learning and deep learning models for complex business problems. Implement and optimize Large Language Models (LLMs) and generative AI solutions. Build agentic AI systems with autonomous decision-making capabilities. Conduct research on emerging AI technologies and their practical applications. Perform model evaluation, validation, and continuous improvement.

Cloud Infrastructure & Full-Stack Development: Architect and implement scalable cloud-native ML/AI solutions on AWS, Azure, or GCP. Develop full-stack applications integrating AI models with modern web technologies. Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.). Implement CI/CD pipelines for ML model deployment and monitoring. Design and optimize cloud infrastructure for high-performance computing workloads.

Data Engineering & Database Management: Design and implement data pipelines for large-scale data processing. Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.).
Optimize database performance for ML workloads and real-time applications. Implement data governance and quality assurance frameworks. Handle streaming data processing and real-time analytics.

Leadership & Collaboration: Mentor junior data scientists and guide technical decision-making. Collaborate with cross-functional teams including product, engineering, and business stakeholders. Present findings and recommendations to technical and non-technical audiences. Lead proof-of-concept projects and innovation initiatives.

Required Qualifications

Education & Experience: Master's or PhD in Computer Science, Data Science, Statistics, Mathematics, or a related field. 5+ years of hands-on experience in data science and machine learning. 3+ years of experience with deep learning frameworks and neural networks. 2+ years of experience with cloud platforms and full-stack development.

Technical Skills - Core AI/ML:
Machine Learning: scikit-learn, XGBoost, LightGBM, advanced ML algorithms
Deep Learning: TensorFlow, PyTorch, Keras, CNN, RNN, LSTM, Transformers
Large Language Models: GPT, BERT, T5, fine-tuning, prompt engineering (a brief sketch follows this posting)
Generative AI: Stable Diffusion, DALL-E, text-to-image, text generation
Agentic AI: multi-agent systems, reinforcement learning, autonomous agents

Technical Skills - Development & Infrastructure:
Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI
Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra, DynamoDB)
Full-Stack Development: React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes
MLOps: MLflow, Kubeflow, model versioning, A/B testing frameworks
Big Data: Spark, Hadoop, Kafka, streaming data processing

Preferred Qualifications: Experience with vector databases and embeddings (Pinecone, Weaviate, Chroma). Knowledge of LangChain, LlamaIndex, or similar LLM frameworks. Experience with model compression and edge deployment. Familiarity with distributed computing and parallel processing. Experience with computer vision and NLP applications. Knowledge of federated learning and privacy-preserving ML. Experience with quantum machine learning. Expertise in MLOps and production ML system design.

Key Competencies

Technical Excellence: Strong mathematical foundation in statistics, linear algebra, and optimization. Ability to implement algorithms from research papers. Experience with model interpretability and explainable AI. Knowledge of ethical AI and bias detection/mitigation.

Problem-Solving & Innovation: Strong analytical and critical thinking skills. Ability to translate business requirements into technical solutions. Creative approach to solving complex, ambiguous problems. Experience with rapid prototyping and experimentation.

Communication & Leadership: Excellent written and verbal communication skills. Ability to explain complex technical concepts to diverse audiences. Strong project management and organizational skills. Experience mentoring and leading technical teams.

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business.
TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/ . TaskUs is proud to be an equal opportunity workplace and is an affirmative action employer. We celebrate and support diversity; we are committed to creating an inclusive environment for all employees. TaskUs' People First culture thrives on it for the benefit of our employees, our clients, our services, and our community. Req Id: R_2507_10290_0 Posted At: Thu Jul 31 2025
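As a small, runnable taste of the LLM tooling the posting lists (Hugging Face, BERT-family models), here is a sentiment-classification sketch; the example tickets are invented:

```python
from transformers import pipeline  # pip install transformers

# The checkpoint is a common public sentiment model; any similar one works.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

tickets = [
    "The agent resolved my issue in minutes, great experience.",
    "Still waiting three days for a refund, very frustrating.",
]
for ticket, result in zip(tickets, classifier(tickets)):
    print(result["label"], round(result["score"], 3), "-", ticket)
```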
Posted 1 day ago
0 years
0 Lacs
India
Remote
Title: Data Engineer
Location: 100% remote (contract)
Immediate joiner: within 1 to 2 weeks
Required skills:
PySpark – must have. Python (able to code from day 1) and Scala (being able to read the code is enough)
Databricks (strong)
Kafka – hands-on real-time streaming in a professional setup
Delta Lake (a brief upsert sketch follows)
Preferred: Unity Catalog (table structures) or Iceberg.
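For the Delta Lake requirement, a minimal upsert sketch using the Delta Lake Python API (available on Databricks or with the delta-spark package); the paths and key column are placeholders:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable  # ships with Databricks / delta-spark

spark = SparkSession.builder.appName("orders-upsert").getOrCreate()

# Hypothetical incremental batch of changed rows.
updates = spark.read.parquet("s3a://bucket/orders_changes/")

# MERGE by primary key: update rows that exist, insert rows that don't.
target = DeltaTable.forPath(spark, "s3a://bucket/delta/orders")
(target.alias("t")
 .merge(updates.alias("u"), "t.order_id = u.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```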
Posted 1 day ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This role has been designed as 'Hybrid' with an expectation that you will work on average 2 days per week from an HPE office.

Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description: Aruba is an HPE company and a leading provider of next-generation network access solutions for the mobile enterprise. Helping some of the largest companies in the world modernize their networks to meet the demands of a digital future, Aruba is redefining the "Intelligent Edge" – and creating new customer experiences across intelligent spaces and digital workspaces. Join us to redefine what's next for you.

How You Will Make Your Mark: The ideal candidate will have experience working with AI technologies, including LLMs/GenAI, and with application development, to build and deploy an AI chatbot to support business management. Experience with MS Power Platform, Java and Databricks is preferred.

Responsibilities – what you'll do: As a Sr. AI Developer, the primary responsibility will be full-stack development of an AI chatbot application for business management, integrating business-relevant data with LLMs, and helping the team deliver incremental features for on-demand AI-assisted analytics services on a hybrid tech stack. Translate business requirements into scalable and performant technical solutions. Design, code, test, and assure the quality of complex AI-powered product features. Partner with a highly motivated and talented set of colleagues. Be a motivated self-starter who can operate with minimal handholding. Collaborate across teams and time zones, demonstrating flexibility and accountability.

Education and Experience Required: 8-10+ years of Data Engineering & AI Development experience, with significant exposure to building AI chatbots on a hybrid tech stack across SQL Server, Hadoop, Azure Data Factory and Databricks. Advanced university degree (e.g., Master's) or demonstrable equivalent.

What You Need To Bring – knowledge and skills: Demonstrated ability to build or integrate AI-driven features into enterprise applications. Strong knowledge of Computer Science fundamentals. Experience with SQL databases and building SSIS packages; knowledge of NoSQL and event streaming (e.g., Kafka) is a bonus. Experience working with LLMs and generative AI frameworks (e.g., OpenAI, Hugging Face, etc.). Proficiency in MS Power Platform and Java; Scala and Python experience preferred. Experience with SAP software (e.g., SAP S/4HANA, SAP BW) is an asset. Proven track record of writing production-grade code for enterprise-scale systems. Knowledge of agentic AI and frameworks. Strong collaboration and communication skills. Experience using tools like JIRA for tracking tasks and bugs, with Agile CI/CD workflows. Strong domain experience across Sales, Finance or Operations with a deep understanding of key KPIs and metrics.
Collaborates with senior managers/directors of the business on the AI chatbot, BI, Data Science and Analytics roadmap. Owns business requirements, prioritization and execution to deliver actionable insights that enable decision making, support strategic initiatives and accelerate profitable growth. Functions as the subject matter expert for data, analytics, and reporting systems within the organization to yield accurate and proper interpretation of core business KPIs/metrics. Performs deep-dive investigations, including applying advanced techniques, to solve some of the most critical and complex business problems in support of business transformation to enable Product, Support, and Software as a Service offerings.

Additional Skills: Accountability, Active Learning, Active Listening, Bias, Business Decisions, Business Development, Business Metrics, Business Performance, Business Strategies, Calendar Management, Coaching, Computer Literacy, Creativity, Critical Thinking, Cross-Functional Teamwork, Design Thinking, Empathy, Follow-Through, Growth Mindset, Intellectual Curiosity, Leadership, Long Term Planning, Managing Ambiguity, Personal Initiative, and more.

What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #aruba

Job: Business Planning
Job Level: Expert

HPE is an Equal Employment Opportunity/Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This is a data engineer position – a programmer responsible for the design, development, implementation and maintenance of data flow channels and data processing systems that support the collection, storage, batch and real-time processing, and analysis of information in a scalable, repeatable, and secure manner, in coordination with the Data & Analytics team. The overall objective is defining optimal solutions for data collection, processing, and warehousing. The role requires Spark Java development expertise in big data processing, Python and Apache Spark, particularly within the banking & finance domain. You will design, code and test data systems and work on implementing them into the internal infrastructure.

Responsibilities:
Ensure high quality software development, with complete documentation and traceability
Develop and optimize scalable Spark Java-based data pipelines for processing and analyzing large-scale financial data
Design and implement distributed computing solutions for risk modeling, pricing and regulatory compliance
Ensure efficient data storage and retrieval using Big Data technologies
Implement best practices for Spark performance tuning, including partitioning, caching and memory management
Maintain high code quality through testing, CI/CD pipelines and version control (Git, Jenkins)
Work on batch processing frameworks for market risk analytics
Promote unit/functional testing and code inspection processes
Work with business stakeholders and Business Analysts to understand the requirements
Work with other data scientists to understand and interpret complex datasets

Qualifications:
5-8 years of experience working in data ecosystems
4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting and other Big Data frameworks
3+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala and SQL
Data integration, migration and large-scale ETL experience (common ETL platforms such as PySpark/DataStage/Ab Initio etc.) – ETL design and build, handling, reconciliation and normalization
Data modeling experience (OLAP, OLTP, logical/physical modeling, normalization, knowledge of performance tuning)
Experience working with large and multiple datasets and data warehouses
Experience building and optimizing 'big data' data pipelines, architectures, and datasets
Strong analytic skills and experience working with unstructured datasets
Ability to effectively use complex analytical, interpretive, and problem-solving techniques
Experience with Confluent Kafka, Red Hat jBPM, CI/CD build pipelines and toolchain – Git, Bitbucket, Jira
Experience with external cloud platforms such as OpenShift, AWS and GCP
Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting frameworks (Kubernetes, OpenShift, Mesos)
Experience integrating search solutions with middleware and distributed messaging – Kafka
Highly effective interpersonal and communication skills with technical and non-technical stakeholders
Experience in the software development life cycle and good problem-solving skills
Excellent problem-solving skills and a strong mathematical and analytical mindset
Ability to work in a fast-paced financial environment

Education: Bachelor's/University degree or equivalent experience in computer science, engineering, or a similar domain
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Data Architecture
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 day ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This role has been designed as 'Hybrid' with an expectation that you will work on average 2 days per week from an HPE office.

Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description: Aruba is an HPE company and a leading provider of next-generation network access solutions for the mobile enterprise. Helping some of the largest companies in the world modernize their networks to meet the demands of a digital future, Aruba is redefining the "Intelligent Edge" – and creating new customer experiences across intelligent spaces and digital workspaces. Join us to redefine what's next for you.

How You Will Make Your Mark: The ideal candidate will have experience deploying and managing enterprise-scale Data Governance practices, along with Data Engineering experience developing the database layer to support and enable AI initiatives, as well as a streamlined user experience with data discovery, security and access control, for meaningful, business-relevant analytics. The candidate will be comfortable with the full-stack analytics ecosystem – the database layer, BI dashboards, and AI/Data Science models and solutions – to effectively define and implement a scalable Data Governance practice.

What You'll Do – Responsibilities: Drive the design and development of the data dictionary, lineage, data quality, and security and access control for business-relevant data subjects and reports across business domains. Engage with the business user community to enable ease of data discovery and build trust in the data through data quality and reliability monitoring with key metrics and SLAs defined. Support the development and sustainment of data subjects in the database layer to enable BI dashboards and AI solutions. Drive the engagement and alignment with the HPE IT/CDO team on governance initiatives, including partnering with functional teams across the business. Test, validate and assure the quality of complex AI-powered product features. Partner with a highly motivated and talented set of colleagues. Be a motivated self-starter who can operate with minimal handholding. Collaborate across teams and time zones, demonstrating flexibility and accountability.

Education and Experience Required: 7+ years of Data Governance and Data Engineering experience, with significant exposure to enabling data availability, data discovery, quality and reliability, with appropriate security and access controls, in an enterprise-scale ecosystem. First-level university degree.

What You Need To Bring – knowledge and skills: Experience working with data governance and metadata management tools (Collibra, Databricks Unity Catalog, Atlan, etc.). Subject matter expertise in consent management concepts and tools. Demonstrated knowledge of research methodology and the ability to manage complex data requests. Excellent analytical thinking, technical analysis, and data manipulation skills.
Proven track record of developing SQL Server Integration Services (SSIS) packages with ETL flows. Experience with AI application deployment governance is a plus. Technologies such as MS SQL Server, Databricks, Hadoop, SAP S/4HANA. Experience with SQL databases and building SSIS packages; knowledge of NoSQL and event streaming (e.g., Kafka) is a bonus. Exceptional interpersonal and written communication skills. Experience and comfort solving problems in an ambiguous environment where there is constant change. Ability to think logically, communicate clearly, and be well organized. Strong knowledge of Computer Science fundamentals. Experience working with LLMs and generative AI frameworks (e.g., OpenAI, Hugging Face, etc.). Proficiency in MS Power Platform, Java, Scala, and Python preferred. Strong collaboration and communication skills. Performs deep-dive investigations, including applying advanced techniques, to solve some of the most critical and complex business problems in support of business transformation to enable Product, Support, and Software as a Service offerings. Strong business acumen and technical knowledge within the area of responsibility. Strong project management skills.

Additional Skills: Accountability, Active Learning, Active Listening, Bias, Business Decisions, Business Development, Business Metrics, Business Performance, Business Strategies, Calendar Management, Coaching, Computer Literacy, Creativity, Critical Thinking, Cross-Functional Teamwork, Design Thinking, Empathy, Follow-Through, Growth Mindset, Intellectual Curiosity, Leadership, Long Term Planning, Managing Ambiguity, Personal Initiative, and more.

What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #aruba

Job: Business Planning
Job Level: Specialist

HPE is an Equal Employment Opportunity/Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 day ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Apache Spark
Good-to-have skills: Java, Scala, PySpark
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will be involved in analyzing requirements, proposing solutions, and ensuring that the data platform aligns with organizational goals and standards. Your role will require you to stay updated with industry trends and best practices to contribute effectively to the team.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay abreast of emerging technologies and methodologies.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Apache Spark.
- Good-to-Have Skills: Experience with Java, Scala, PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration tools and techniques.
- Familiarity with cloud platforms and services related to data engineering.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Kolkata office.
- 15 years of full-time education is required.
Posted 1 day ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

At Boeing, we innovate and collaborate to make the world a better place. We're committed to fostering an environment for every teammate that's welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.

Overview: As a leading global aerospace company, Boeing develops, manufactures and services commercial airplanes, defense products and space systems for customers in more than 150 countries. As a top U.S. exporter, the company leverages the talents of a global supplier base to advance economic opportunity, sustainability and community impact. Boeing's team is committed to innovating for the future, leading with sustainability, and cultivating a culture based on the company's core values of safety, quality and integrity.

Technology for today and tomorrow: The Boeing India Engineering & Technology Center (BIETC) is a 5500+ engineering workforce that contributes to global aerospace growth. Our engineers deliver cutting-edge R&D, innovation, and high-quality engineering work in global markets, and leverage new-age technologies such as AI/ML, IIoT, Cloud, Model-Based Engineering, and Additive Manufacturing, shaping the future of aerospace.

People-driven culture: At Boeing, we believe creativity and innovation thrive when every employee is trusted, empowered, and has the flexibility to choose, grow, learn, and explore. We offer variable arrangements depending upon business and customer needs, and professional pursuits that offer greater flexibility in the way our people work. We also believe that collaboration, frequent team engagements, and face-to-face meetings bring together different perspectives and thoughts – enabling every voice to be heard and every perspective to be respected. No matter where or how our teammates work, we are committed to positively shaping people's careers and being thoughtful about employee wellbeing. With us, you can create and contribute to what matters most in your career, community, country, and world. Join us in powering the progress of global aerospace.

The Boeing India IT Product Systems team is currently looking for an Associate Software Developer – Java full stack to join the team in Bangalore, India.

Position Responsibilities: Understands and develops software solutions to meet end user requirements. Ensures that the application integrates with the overall system architecture, utilizing standard IT lifecycle methodologies and tools. Develops algorithms, data and process models, plans interfaces and writes interface control documents for use in the construction of solutions of moderate complexity. The employer will not sponsor applicants for employment visa status.

Basic Qualifications (Required Skills/Experience):
2+ years of relevant experience in the IT industry
Experience designing and implementing idiomatic RESTful APIs using the Spring framework (v6.0+) with Spring Boot (v3.0+) and Spring Security (v6.0+) in Java (v17+); experience with additional languages (Scala/Kotlin/others) preferred
Working experience with RDBMS, basic SQL scripting and querying, specifically with SQL Server (2018+) and Teradata (v17+); additional knowledge of schema/modelling/querying optimization preferred
Experience with TypeScript (v5+), JavaScript (ES6+), Angular (v15+), Material UI, AmCharts (v5+)
Experience working with ALM tools (Git, Gradle, SonarQube, Coverity, Docker, Kubernetes) driven by tests (JUnit, Mockito, Hamcrest etc.)
- Experience in shell scripting (Bash/sh), CI/CD processes and tools (GitLab CI or similar), and OCI containers (Docker/Podman/Buildah, etc.).
- Data analysis and engineering experience with Apache Spark (v3+) in Scala, Apache Iceberg/Parquet, etc. Experience with Trino/Presto is a bonus.
- Familiarity with GCP/Azure (VMs, container runtimes, BLOB storage solutions) preferred but not mandatory.

Preferred Qualifications (Desired Skills/Experience):
- A Bachelor’s degree or higher is preferred.
- Strong backend experience (Java/Scala/Kotlin, etc.) with basic data analysis/engineering experience (Spark/Parquet, etc.); OR
- Basic backend experience (Java/Scala, etc.) with strong data analysis/engineering experience (Spark/Parquet, etc.); OR
- Moderate backend experience (Java/Kotlin, etc.) with strong frontend experience (Angular 15+ with SASS/Angular Material) and exposure to DevOps pipelines (GitLab CI).

Typical Education & Experience: A Bachelor's degree with typically 2 to 5 years of experience, or a Master's degree with typically 1 to 2 years of experience, is preferred but not required.

Relocation: This position offers relocation within India, based on candidate eligibility.

Applications for this position will be accepted until Aug. 09, 2025.

Export Control Requirements: This is not an Export Control position.

Visa Sponsorship: Employer will not sponsor applicants for employment visa status.

Shift: Not a Shift Worker (India).

Equal Opportunity Employer:

We are an equal opportunity employer. We do not accept unlawful discrimination in our recruitment or employment practices on any grounds including but not limited to race, color, ethnicity, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military and veteran status, or other characteristics covered by applicable law.

We have teams in more than 65 countries, and each person plays a role in helping us become one of the world’s most innovative, diverse and inclusive companies. We are proud members of the Valuable 500 and welcome applications from candidates with disabilities. Applicants are encouraged to share with our recruitment team any accommodations required during the recruitment process. Accommodations may include but are not limited to: conducting interviews in accessible locations that accommodate mobility needs, encouraging candidates to bring and use any existing assistive technology such as screen readers, and offering flexible interview formats such as virtual or phone interviews.
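To give a concrete feel for the Spark-in-Scala work this posting names, here is a minimal batch-job sketch; the bucket paths and column names are hypothetical, and it assumes Spark 3.x with the Hadoop S3A connector available.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object FlightEventsRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("flight-events-rollup")
      .getOrCreate()

    // Hypothetical input: Parquet files of flight events.
    val events = spark.read.parquet("s3a://example-bucket/flight-events/")

    // Daily count and average duration per aircraft type (illustrative columns).
    val rollup = events
      .groupBy(col("event_date"), col("aircraft_type"))
      .agg(count("*").as("events"), avg("duration_min").as("avg_duration_min"))

    // Write partitioned Parquet back to the lake (placeholder path).
    rollup.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-bucket/rollups/flight-events-daily/")

    spark.stop()
  }
}
```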
Posted 1 day ago
4.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Join our Team

About This Opportunity

The support engineer is a member of a team with strong skills in supporting customers, first-line engineers, and third parties in advanced troubleshooting, fault isolation, and remediation, to secure availability and fast resolution of the OSS/BSS product. You will be surrounded by people who are smart, passionate about cloud computing, and believe that world-class support is critical to customer success. Every day will bring new and exciting challenges on the job while you:

- Learn and use groundbreaking technologies
- Apply advanced troubleshooting techniques to provide unique solutions to our customers' individual needs
- Interact with leading technologists around the world
- Work directly with the Ericsson Product Development team to help reproduce and resolve customer issues
- Leverage your extensive customer support experience to provide feedback to internal Ericsson teams on how our customers use our services
- Drive customer communication during critical events

What you will do

Purpose: we are here to solve product issues for our customers by improving our product long term. Proactivity in all we do!

Role responsibilities:
- Deal with customer support requests according to the defined process
- Provide support on detailed technical queries and solutions to source-code-level problems
- Create and conclude trouble reports and update them with recommended solutions towards the Design Maintenance team when identifying SW bugs
- Be part of 24/7 emergency duty and support on critical cases
- Collect customer feedback and submit it to the R&D program to continue improving the product
- Continuously update the knowledge base and share knowledge within the organization
- Participate in FFI (First Feature Introduction) activities
- Provide on-site support when needed
- Be part of serviceability and service preparation activities

Required Skills:
- Documented and proven knowledge of cloud-native concepts, Docker, Kubernetes, AWS, Azure, GCP
- Awareness of product security, privacy, and risk assessment
- Deep competence in troubleshooting and fault isolation using tools in complex IT/telco systems
- Understanding, analyzing, and troubleshooting code
- Experience with scripting in Bash, Python, Perl, and Ansible; Cassandra and Scala preferred
- Ability to maintain professional communication with customers/local companies, especially in critical situations
- Composure and readiness to work under high pressure from our customers/local companies while providing support

You will bring a minimum of 4-10 years' experience running services on Linux, technical support, emergency handling, and customer ticket/request handling. To qualify, the candidate should have demonstrated the key traits required:
- A very strong customer focus
- Ability to juggle many tasks and projects in a fast-moving environment
- A self-starter who is excited about technology
- Good time management and multi-tasking capabilities
- A good teammate who is also comfortable working on their own initiative
- Flexibility with working hours
- Familiarity with general business terms and processes
- An innovative and creative approach to problem solving, coupled with advanced diagnostic and technical analysis skills
- Values of perseverance, professionalism, respect, and working with integrity

Education: B.Tech, M.Tech, or similar experience in a relevant area (SW development, telco business). Minimum 4 years of working experience in the Telecom area (mandatory).

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity.
The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Gurgaon
Req ID: 770584
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Carry out software design, Scala & Spark development, and automated testing of new and existing components in an Agile, DevOps and dynamic environment
- Promote development standards, code reviews, mentoring, and knowledge sharing
- Provide production support and troubleshooting
- Implement the tools and processes, handling performance, scale, availability, accuracy and monitoring
- Liaise with BAs to ensure that requirements are correctly interpreted and implemented
- Participate in regular planning and status meetings, and provide input to the development process through involvement in sprint reviews and retrospectives
- Provide input into system architecture and design
- Carry out peer code reviews

Requirements

To be successful in this role, you should meet the following requirements:
- Scala development and design using Scala 2.10+, or Java development and design using Java 1.8+
- Experience with most of the following technologies: Apache Hadoop, Scala, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services
- Sound knowledge of working on the Unix/Linux platform
- Hands-on experience building data pipelines using Hadoop components - Hive, Spark, Spark SQL
- Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins) and requirement management in JIRA
- Understanding of big data modelling using relational and non-relational techniques
- Experience debugging code issues and communicating the highlighted differences to the development team/architects
- Experience with time-series/analytics databases such as Elasticsearch
- Experience with scheduling tools such as Airflow and Control-M
- Understanding or experience of cloud design patterns
- Exposure to DevOps and Agile project methodologies such as Scrum and Kanban
- Experience developing HiveQL and UDFs for analysing semi-structured/structured datasets

Location: Pune and Bangalore

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI
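As an illustration of the Hive/Spark SQL pipeline work this role lists, here is a minimal sketch in Scala; the database and table names (raw.trades, ref.books) and columns are hypothetical, and it assumes a Spark build with Hive support enabled.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TradeEnrichment {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-enrichment")
      .enableHiveSupport() // read/write Hive-managed tables via the metastore
      .getOrCreate()

    // Hypothetical source tables registered in the Hive metastore.
    val trades = spark.table("raw.trades")
    val books  = spark.table("ref.books")

    // Join small reference data with a broadcast hint, keep valid records only.
    val enriched = trades
      .join(broadcast(books), Seq("book_id"))
      .filter(col("notional") > 0)
      .withColumn("load_ts", current_timestamp())

    // Persist the curated result as a Hive table.
    enriched.write
      .mode("overwrite")
      .saveAsTable("curated.trades_enriched")

    spark.stop()
  }
}
```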
Posted 1 day ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a talented and motivated Data Engineer to join our growing data team. You will play a key role in building scalable data pipelines, optimizing data infrastructure, and enabling data-driven solutions.

Primary Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines for batch and real-time data processing
- Build and optimize data models and data warehouses to support analytics and reporting
- Collaborate with analysts and software engineers to deliver high-quality data solutions
- Ensure data quality, integrity, and security across all systems
- Monitor and troubleshoot data pipelines and infrastructure for performance and reliability
- Contribute to internal tools and frameworks to improve data engineering workflows
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- 5+ years of experience working on commercially available software and/or healthcare platforms as a Data Engineer
- 3+ years of solid experience designing and building enterprise data solutions on cloud
- 1+ years of experience developing solutions hosted within public cloud providers such as Azure or AWS, or private cloud/container-based systems using Kubernetes/OpenShift
- Experience with modern relational databases
- Experience with data warehousing services, preferably Snowflake
- Experience using modern software engineering and product development tools including Agile/SAFe, continuous integration, continuous delivery, DevOps, etc.
- Solid experience operating in a quickly changing environment and driving technological innovation to meet business requirements
- Skilled at optimizing SQL statements
- Subject matter expertise in cloud technologies, preferably Azure, and the Big Data ecosystem

Preferred Qualifications:
- Experience with real-time data streaming and event-driven architectures
- Experience building Big Data solutions on public cloud (Azure)
- Experience building data pipelines on Azure with Databricks Spark, Scala, Azure Data Factory, Kafka and Kafka Streams, App Services, and Azure Functions
- Experience developing RESTful services in .NET, Java or any other language
- Experience with DevOps in data engineering
- Experience with microservices architecture
- Exposure to DevOps practices and infrastructure-as-code (e.g., Terraform, Docker)
- Knowledge of data governance and data lineage tools
- Ability to establish repeatable processes and best practices, and to implement version control software in a cloud team environment

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
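The cloud-plus-Snowflake pairing above can be illustrated with a short Spark-to-Snowflake load. This is a sketch only: the storage path, table names, and credentials are placeholders, and it assumes the Snowflake Spark connector (net.snowflake.spark.snowflake) is on the classpath.

```scala
import org.apache.spark.sql.SparkSession

object LoadClaimsToSnowflake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("claims-to-snowflake")
      .getOrCreate()

    // Hypothetical curated dataset produced by an upstream ETL step (ADLS path).
    val claims = spark.read.parquet(
      "abfss://curated@exampleaccount.dfs.core.windows.net/claims/")

    // Connection options follow the connector's sfOptions convention;
    // values are placeholders and would normally come from a secret store.
    val sfOptions = Map(
      "sfURL"       -> "example_account.snowflakecomputing.com",
      "sfUser"      -> "etl_user",
      "sfPassword"  -> sys.env.getOrElse("SF_PASSWORD", ""),
      "sfDatabase"  -> "ANALYTICS",
      "sfSchema"    -> "CLAIMS",
      "sfWarehouse" -> "ETL_WH"
    )

    // Append the batch into a Snowflake table (hypothetical name).
    claims.write
      .format("net.snowflake.spark.snowflake")
      .options(sfOptions)
      .option("dbtable", "CLAIMS_CURATED")
      .mode("append")
      .save()

    spark.stop()
  }
}
```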
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About NetApp

NetApp is the intelligent data infrastructure company, turning a world of disruption into opportunity for every customer. No matter the data type, workload or environment, we help our customers identify and realize new business possibilities. And it all starts with our people.

If this sounds like something you want to be part of, NetApp is the place for you. You can help bring new ideas to life, approaching each challenge with fresh eyes. Of course, you won't be doing it alone. At NetApp, we're all about asking for help when we need it, collaborating with others, and partnering across the organization - and beyond.

Job Summary

As a Software Engineer you will work as part of a team responsible for actively participating in driving product development and strategy. In addition, you will participate in activities that include testing and debugging of operating systems that run NetApp storage applications. As part of the Research and Development function, the overall focus of the group is on competitive market and customer requirements, supportability, technology advances, product quality, product cost and time-to-market. Software engineers focus on enhancements to existing products as well as new product development.

Job Requirements:
- Should be able to write code independently, own features, and lead in certain areas of the product
- Excellent problem solver, proficient coder and designer
- Good experience in Scala and Java
- Proficient in Docker and microservices; knowledge of SaaS and AWS is an added advantage
- Strong in data structures and algorithms
- Expertise in REST API design and implementation

Education:
- A minimum of 5 years of experience is required; 5 to 8 years of experience is preferred
- A Bachelor of Science degree in Electrical Engineering or Computer Science, a Master's degree, or a PhD; or equivalent experience is required

At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.

Equal Opportunity Employer: NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.

Why NetApp?

We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better - but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches.

We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life.
If you want to help us build knowledge and solve big problems, let's talk.
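For the REST API design point in the requirements, here is a minimal sketch in Scala. The posting names no HTTP toolkit, so this assumes Akka HTTP; the volume resource and its JSON shape are hypothetical.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

import scala.concurrent.ExecutionContextExecutor
import scala.io.StdIn

object VolumesApi {
  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "volumes-api")
    implicit val ec: ExecutionContextExecutor = system.executionContext

    // Hypothetical read-only endpoints for a storage-volume resource.
    val routes =
      pathPrefix("v1" / "volumes") {
        concat(
          pathEnd {
            get { complete("""[{"id":"vol-1","sizeGb":100}]""") }
          },
          path(Segment) { id =>
            get { complete(s"""{"id":"$id","sizeGb":100}""") }
          }
        )
      }

    val binding = Http().newServerAt("0.0.0.0", 8080).bind(routes)
    println("Volumes API online at http://localhost:8080/v1/volumes - press ENTER to stop")
    StdIn.readLine()
    binding.flatMap(_.unbind()).onComplete(_ => system.terminate())
  }
}
```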
Posted 1 day ago
10.0 years
0 Lacs
Delhi, India
On-site
Company Size: Mid-Sized
Experience Required: 10 - 15 years
Working Days: 5 days/week
Office Location: Delhi

Role & Responsibilities
- Lead and mentor a team of data engineers, ensuring high performance and career growth.
- Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
- Drive the development and implementation of data governance frameworks and best practices.
- Work closely with cross-functional teams to define and execute a data roadmap.
- Optimize data processing workflows for performance and cost efficiency.
- Ensure data security, compliance, and quality across all data platforms.
- Foster a culture of innovation and technical excellence within the data team.

Ideal Candidate
- 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
- Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, GoLang, JavaScript, HTML, and CSS.
- Proficiency in SQL, Python, and Scala for data processing and analytics.
- Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
- Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
- Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
- Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
- Deep knowledge of data governance, security, and compliance (GDPR, SOC 2, etc.).
- Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
- Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
- Proven ability to drive technical strategy and align it with business objectives.
- Strong leadership, communication, and stakeholder management skills.

Preferred Qualifications
- Experience in machine learning infrastructure or MLOps is a plus.
- Exposure to real-time data processing and analytics.
- Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
- Prior experience in a SaaS or high-growth tech company.

Perks, Benefits and Work Culture

Testimonial from a designer: "One of the things I love about the design team at Wingify is the fact that every designer has a style which is unique to them. The second best thing is non-compliance to pre-existing rules for new products. So I just don't follow guidelines, I help create them."

Skills: infrastructure, SOC 2, Ansible, data governance, Redshift, GDPR, JavaScript, Cassandra, design, Spring Boot, Jenkins, Docker, MongoDB, Java, TiDB, ELK, Python, PHP, AWS, Snowflake, LLD, Chef, BigQuery, GCP, GoLang, HTML, Kafka, Grafana, Kubernetes, Scala, CSS, Hadoop, Azure, Redis, SQL, data processing, Spark, HLD, Node.js, Google Guice, compliance
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
💼 Job Title: Kafka Developer
👨‍💻 Job Type: Full-time
📍 Location: Pune
💼 Work regime: Hybrid
🔥 Keywords: Kafka, Apache Kafka, Kafka Connect, Kafka Streams, Schema Registry

Position Overview:

We are looking for a Kafka Developer to design and implement real-time data ingestion pipelines using Apache Kafka. The role involves integrating with upstream flow record sources, transforming and validating data, and streaming it into a centralized data lake for analytics and operational intelligence.

Key Responsibilities:
- Develop Kafka producers to ingest flow records from upstream systems such as flow record exporters (e.g., IPFIX-compatible probes).
- Build Kafka consumers to stream data into Spark Structured Streaming jobs and downstream data lakes.
- Define and manage Kafka topic schemas using Avro and Schema Registry for schema evolution.
- Implement message serialization, transformation, enrichment, and validation logic within the streaming pipeline.
- Ensure exactly-once processing, checkpointing, and fault tolerance in streaming jobs.
- Integrate with downstream systems such as HDFS or Parquet-based data lakes, ensuring compatibility with ingestion standards.
- Collaborate with Kafka administrators to align topic configurations, retention policies, and security protocols.
- Participate in code reviews, unit testing, and performance tuning to ensure high-quality deliverables.
- Document pipeline architecture, data flow logic, and operational procedures for handover and support.

Required Skills & Qualifications:
- Proven experience developing Kafka producers and consumers for real-time data ingestion pipelines.
- Strong hands-on expertise in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Proficiency in Apache Spark (Structured Streaming) for real-time data transformation and enrichment.
- Solid understanding of IPFIX, NetFlow, and network flow data formats; experience integrating with nProbe Cento is a plus.
- Experience with Avro, JSON, or Protobuf for message serialization and schema evolution.
- Familiarity with Cloudera Data Platform components such as HDFS, Hive, YARN, and Knox.
- Experience integrating Kafka pipelines with data lakes or warehouses using Parquet or Delta formats.
- Strong programming skills in Scala, Java, or Python for stream processing and data engineering tasks.
- Knowledge of Kafka security protocols including TLS/SSL, Kerberos, and access control via Apache Ranger.
- Experience with monitoring and logging tools such as Prometheus, Grafana, and Splunk.
- Understanding of CI/CD pipelines, Git-based workflows, and containerization (Docker/Kubernetes).

A little about us:

Innova Solutions is a diverse and award-winning global technology services partner. We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field. Founded in 1998 and headquartered in Atlanta (Duluth), Georgia, Innova Solutions employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B. We deliver strategic technology and business transformation solutions globally, operate through global delivery centers across North America, Asia, and Europe, and provide services for data center migration and workload development for cloud service providers.
Awardee of prestigious recognitions including:
- Women’s Choice Awards - Best Companies to Work for Women & Millennials, 2024
- Forbes, America’s Best Temporary Staffing and Best Professional Recruiting Firms, 2023
- American Best in Business, Globee Awards, Healthcare Vulnerability Technology Solutions, 2023
- Global Health & Pharma, Best Full Service Workforce Lifecycle Management Enterprise, 2023
- Received 3 SBU Leadership in Business Awards
- Stevie International Business Awards, Denials Remediation Healthcare Technology Solutions, 2023
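A minimal sketch of the core pipeline this role describes: Kafka flow records landing in a Parquet data lake via Spark Structured Streaming. The broker address, topic name, JSON schema, and paths are placeholders; a production pipeline would typically use Avro with Schema Registry, as the posting notes.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object FlowRecordsIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("flow-records-ingest")
      .getOrCreate()

    // Illustrative subset of an IPFIX-style flow record.
    val flowSchema = StructType(Seq(
      StructField("src_ip",   StringType),
      StructField("dst_ip",   StringType),
      StructField("bytes",    LongType),
      StructField("event_ts", TimestampType)
    ))

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // placeholder broker
      .option("subscribe", "flow-records")               // hypothetical topic
      .option("startingOffsets", "latest")
      .load()

    // Parse the message value and derive a date partition column.
    val flows = raw
      .select(from_json(col("value").cast("string"), flowSchema).as("r"))
      .select("r.*")
      .withColumn("dt", to_date(col("event_ts")))

    // Checkpointing gives fault tolerance; with an idempotent file sink like
    // Parquet this yields effectively exactly-once output.
    val query = flows.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/lake/flows/")      // placeholder path
      .option("checkpointLocation", "hdfs:///chk/flows/")
      .partitionBy("dt")
      .start()

    query.awaitTermination()
  }
}
```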
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS

VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organization, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

VOIS India

In 2009, VOIS started operating in India and now has established global delivery centers in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations and HR Operations and more.

Mode: Hybrid
Location: Pune
Experience: 5 to 8 years

Core Competencies, Knowledge And Experience
- 5-7 years' experience in managing large data sets, simulation/optimization and distributed computing tools.
- Excellent communication & presentation skills with a track record of engaging with business project leads.

Role Purpose
- Primary responsibility is to define the data lifecycle, including data models and data sources for the analytics platform, gathering data from the business and cleaning it in order to provide ready-to-work inputs for Data Scientists.
- Apply strong expertise in automating end-to-end data science pipelines and big data pipelines (collect, ingest, store, transform and optimize for scale).
- The incumbent will work on the assigned projects and their stakeholders alongside Data Scientists to understand the business challenges faced by them. The work involves working with large data sets, simulation/optimization and distributed computing tools.
- The candidate works with the assigned business stakeholder(s) to agree scope, deliverables, process and expected outcomes for the products and services developed.
Must Have Technical / Professional Qualifications
- Experience working with large data sets, simulation/optimization and distributed computing tools
- Experience transforming data with Apache Spark for data science activities
- Experience working with distributed storage on cloud (AWS/GCP) or HDFS
- Experience building data pipelines with Airflow
- Experience ingesting data from different sources using Kafka/Sqoop/Flume/NiFi
- Experience solving simple to complex big data platform/framework issues
- Experience building real-time analytics systems with Apache Spark, Flink & Kafka
- Experience in Scala, Python, Java & R
- Experience working with NoSQL databases (Cassandra, MongoDB, HBase, Redis)

Key Accountabilities And Decision Ownership
- Understand the data science problems and design & schedule end-to-end pipelines
- For a given problem, identify the right big data technologies to solve it in an optimized way
- Automate the data science pipelines, deploy ML algorithms and track their performance
- Build customer 360 views and feature stores for different machine learning problems
- Build data models for the machine learning feature store on high-velocity, flexible-schema databases

VOIS Equal Opportunity Employer Commitment

VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.

As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills!

Apply now, and we’ll be in touch!
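To illustrate the real-time analytics requirement above, a short Spark Structured Streaming sketch in Scala with event-time windowing; the broker address, topic, and key semantics are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object UsageWindowedCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("usage-windowed-counts")
      .getOrCreate()

    // Kafka source exposes key, value, and timestamp columns out of the box.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // placeholder broker
      .option("subscribe", "usage-events")               // hypothetical topic
      .load()
      .select(
        col("key").cast("string").as("subscriber_id"),
        col("timestamp").as("event_ts")
      )

    // 5-minute tumbling windows with a 10-minute watermark for late events.
    val counts = events
      .withWatermark("event_ts", "10 minutes")
      .groupBy(window(col("event_ts"), "5 minutes"), col("subscriber_id"))
      .count()

    // Sink kept simple for illustration; a real system would write to a
    // serving store or another Kafka topic.
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```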
Posted 1 day ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Description: Data Scientist

Requirements and Responsibilities (Full-stack AI Engineer):
- Must have: Programming languages - Python, Java/Scala
- Must have: Experience with data processing libraries like Pandas, NumPy, and Scikit-learn
- Must have: Proficiency in distributed computing platforms - Apache Spark (PySpark, Scala), Torch, etc.
- Must have: Proficiency in API development with FastAPI or Spring Boot; understanding of O&M - logging, monitoring, fault management, security, etc.
- Good to have: Hands-on experience with deployment & orchestration tools - Docker, Kubernetes, Helm
- Good to have: Experience with cloud platforms (AWS SageMaker/Bedrock, GCP, or Azure)
- Good to have: Strong programming skills in TensorFlow, PyTorch, or similar ML frameworks (training and deployment)

What we offer

Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 day ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Description: AI/ML Engineer

Requirements and Responsibilities (Full-stack AI Engineer):
- Must have: Programming languages - Python, Java/Scala
- Must have: Experience with data processing libraries like Pandas, NumPy, and Scikit-learn
- Must have: Proficiency in distributed computing platforms - Apache Spark (PySpark, Scala), Torch, etc.
- Must have: Proficiency in API development with FastAPI or Spring Boot; understanding of O&M - logging, monitoring, fault management, security, etc.
- Good to have: Hands-on experience with deployment & orchestration tools - Docker, Kubernetes, Helm
- Good to have: Experience with cloud platforms (AWS SageMaker/Bedrock, GCP, or Azure)
- Good to have: Strong programming skills in TensorFlow, PyTorch, or similar ML frameworks (training and deployment)

What we offer

Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 day ago
40.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

Big Data Service Team @ Oracle Analytics

OCI is leading the transformation to cloud-native Big Data technologies in our hyper-scale, multi-tenant cloud, deployed in more than 20 regions worldwide. OCI is committed to providing the best in cloud services that meet the needs of our customers, who are tackling some of the world's biggest challenges.

The Big Data Service team’s charter is to offer a managed, cloud-native Big Data Service focused on large-scale data processing and analytics on unstructured data stored in data lakes, and on managing the data in data lakes. The service work scope encompasses good integration with OCI’s native infrastructure (security, cloud, storage, etc.) and deep integration with other relevant cloud-native services in OCI (like Oracle Kubernetes, Data Catalog, ADW, etc.). It includes cloud-native ways of doing service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernels to take care of a security vulnerability).

We are interested in senior engineers with expertise and passion for solving difficult problems in distributed systems and highly available services. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Desired Skills And Experience
- Deep understanding of how distributed resilient software is built and deployed
- Prior experience in building, or contributing to, distributed data-intensive systems
- Experience delivering and operating large-scale, highly available distributed systems
- Experience with larger projects (large codebases)
- Experience with open-source software in the Big Data ecosystem
- Experience at an organization with a strong operational/dev-ops culture
- Expertise in coding in Java or Scala with emphasis on tuning/optimization
- Good software engineering skills: knowing how to write clean, testable, and maintainable code, write documentation, and produce simple and robust designs, including designing APIs

Bonus
- Deep understanding of Java and JVM mechanics
- Interested in speaking about their work, internally and externally, or writing articles
- BS in Computer Science or a related technical field, or equivalent practical experience
- Solid foundation in data structures, algorithms, and software design with strong analytical and debugging skills
- Passion for learning and always improving yourself and the team around you

Responsibilities

What to expect from the job:
- Working on distributed data-intensive systems, often as part of open-source communities
- Taking ownership of critical parts of the cloud service, including shaping its direction
- Coding, integrating, and operationalizing open and closed source data ecosystem components for Oracle cloud service offerings
- Becoming an active member of the Apache open-source community when working on open-source components
- Optionally: presenting work at conferences, meetups, or via articles
- Working with, and supporting, customers/users of the cloud service
- Designing, developing, troubleshooting, and debugging software programs for databases, applications, tools, networks, etc.
Qualifications: Career Level - IC2

About Us

As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
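On the tuning/optimization emphasis in the desired skills, one classic Spark-in-Scala example is preferring reduceByKey over groupByKey to cut shuffle volume; input and output paths below are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object ShuffleTuning {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("shuffle-tuning").getOrCreate()
    val sc = spark.sparkContext

    val pairs = sc.textFile("hdfs:///data/words/*.txt") // placeholder input
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1L))

    // groupByKey ships every value across the network before summing:
    // val counts = pairs.groupByKey().mapValues(_.sum)

    // reduceByKey combines locally on each partition first, so far less
    // data crosses the shuffle boundary - the usual first tuning step.
    val counts = pairs.reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs:///out/word-counts") // placeholder output
    spark.stop()
  }
}
```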
Posted 1 day ago