
6093 Scala Jobs - Page 23

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Software Engineer at Carelon Global Solutions India, you will play a crucial role in our team as an AWS and Snowflake Developer. Reporting to the Team Lead, your primary responsibility will be the development and maintenance of our data infrastructure, ensuring its optimal performance and supporting data analytics initiatives.

Your key responsibilities will include:
- Designing, developing, and maintaining scalable data pipelines and ETL processes using AWS services and Snowflake.
- Implementing data models, data integration, and data migration solutions.
- Working on Scala to Snowpark conversion.
- Experience in cloud migration projects would be advantageous.
- Hands-on experience with AWS services such as Lambda, Step Functions, Glue, and S3 buckets.
- Certification in Python is a plus.
- Knowledge of job metadata and ETL step metadata creation, migration, and execution.
- Expertise in Snowflake.
- Familiarity with Elevance Health OS would be a plus.

Qualifications: Full-time IT Engineering or equivalent degree, preferably in Computers.

Experience: Minimum of 2 years as an AWS, Snowflake, Python developer.

Skills and competencies:
- Design, develop, and maintain scalable data pipelines and ETL processes using AWS services and Snowflake.
- Manage and optimize Snowflake environments for efficient performance and cost-effectiveness.
- Collaborate with data analysts, data scientists, and stakeholders to deliver solutions that meet business needs.
- Monitor and optimize the performance of data pipelines and Snowflake queries.
- Ensure data security and compliance with regulations and standards.
- Proficiency in SQL, Python, or Scala, data modeling, data integration tools, and ETL processes.
- Experience with version control systems such as Git and with CI/CD pipelines.

At Carelon, we offer a world of limitless opportunities to our associates, focusing on learning and development, innovation, well-being, rewards, and recognition. Our inclusive culture celebrates diversity and different ways of working. If you require a reasonable accommodation due to a disability during the interview process, feel free to request it. Join us as a Software Engineer at Carelon Global Solutions and be part of a team committed to improving lives, simplifying healthcare, and expecting more.
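For context on the kind of pipeline work described above, here is a minimal Scala sketch of an AWS-based ETL step of the sort the listing implies. The bucket paths, schema, and column names are hypothetical placeholders, not Carelon's systems; a Scala-to-Snowpark conversion would map similar DataFrame operations onto Snowflake's engine.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClaimsEtlJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("claims-etl")
      .getOrCreate()

    // Hypothetical S3 locations; real paths, schemas and credentials
    // would come from the job's metadata configuration.
    val rawPath     = "s3a://example-bucket/raw/claims/"
    val curatedPath = "s3a://example-bucket/curated/claims/"

    // Ingest raw CSV, keep only valid records, and derive a load date.
    val curated = spark.read
      .option("header", "true")
      .csv(rawPath)
      .filter(col("claim_amount").cast("double") > 0)
      .withColumn("load_date", current_date())

    // Persist the curated layer as partitioned Parquet for downstream
    // analytics (or for loading into Snowflake via a connector or Snowpark).
    curated.write
      .mode("overwrite")
      .partitionBy("load_date")
      .parquet(curatedPath)

    spark.stop()
  }
}
```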

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have at least 3 years of hands-on experience in data modeling, ETL processes, developing reporting systems, and data engineering using tools such as ETL, BigQuery, SQL, Python, or Alteryx, along with advanced knowledge of SQL programming and database management.

You must also have a minimum of 3 years of solid experience working with Business Intelligence reporting tools like Power BI, Qlik Sense, Looker, or Tableau, and a good understanding of data warehousing concepts and best practices. Excellent problem-solving and analytical skills are essential for this role, as is being detail-oriented with strong communication and collaboration skills. The ability to work both independently and as part of a team is crucial for success in this position.

Preferred skills include experience with GCP cloud services such as BigQuery, Cloud Composer, Dataflow, CloudSQL, Looker, Looker ML, Data Studio, and GCP QlikSense. Strong SQL skills and proficiency in various BI/reporting tools to build self-serve reports, analytic dashboards, and ad hoc packages leveraging enterprise data warehouses are also desired, along with at least 1 year of experience in Python and Hive/Spark/Scala/JavaScript.

In addition, you should have a solid understanding of consuming data models, developing SQL, addressing data quality issues, proposing BI solution architecture, and articulating best practices in end-user visualizations, as well as development delivery experience. A good grasp of BI tools, architectures, and visualization solutions is important, coupled with an inquisitive and proactive approach to learning new tools and techniques. Strong oral, written, and interpersonal communication skills are necessary, and you should be comfortable working in a dynamic environment where problems are not always well-defined.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Join as a Big Data Engineer at Barclays and lead the evolution of the digital landscape to drive innovation and excellence. Utilize cutting-edge technology to revolutionize digital offerings and ensure unparalleled customer experiences.

To succeed in this role, you should possess the following essential skills:
- Full-stack software development for large-scale, mission-critical applications.
- Proficiency in distributed big data systems such as Spark, Hive, Kafka streaming, Hadoop, and Airflow.
- Expertise in Scala, Java, Python, J2EE technologies, microservices, Spring, Hibernate, and REST APIs.
- Experience with n-tier web application development and frameworks such as Spring Boot, Spring MVC, JPA, and Hibernate.
- Familiarity with version control systems, particularly Git; GitHub Copilot experience is a bonus.
- Proficiency in API development using SOAP or REST, JSON, and XML.
- Hands-on experience developing back-end applications with multi-process and multi-threaded architectures.
- Skill in building scalable microservices solutions using integration design patterns, Docker, containers, and Kubernetes.
- Knowledge of DevOps practices including CI/CD, test automation, and build automation using tools such as Jenkins, Maven, Chef, Git, and Docker.
- Experience with data processing in cloud environments such as Azure or AWS.
- Experience in data product development and Agile methodologies such as Scrum.
- A results-oriented approach with strong analytical and problem-solving skills.
- Excellent verbal and written communication and presentation skills.

Your primary responsibilities will include:
- Developing and delivering high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring scalability, maintainability, and performance optimization.
- Collaborating cross-functionally with product managers, designers, and engineers to define software requirements, devise solution strategies, and align with business objectives.
- Promoting a culture of code quality and knowledge sharing through participation in code reviews and industry technology communities.
- Ensuring secure coding practices to protect data and mitigate vulnerabilities, along with effective unit testing practices for proper code design and reliability.

As a Big Data Engineer at Barclays, you will play a crucial role in designing, developing, and enhancing software to provide business, platform, and technology capabilities for customers and colleagues. You will contribute to technical excellence, continuous improvement, and risk mitigation while adhering to Barclays' values of Respect, Integrity, Service, Excellence, and Stewardship, and embodying the Barclays Mindset of Empower, Challenge, and Drive.
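As a rough illustration of the Spark and Kafka streaming work this role mentions, here is a minimal Scala Structured Streaming sketch; the broker address, topic name, and checkpoint path are hypothetical placeholders, not Barclays infrastructure.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object PaymentsStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("payments-stream")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical broker and topic names.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "payments")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Count events per 1-minute window as a simple aggregation.
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    // Write results to the console; a real job would target HDFS, Hive or a service.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/payments")
      .start()
      .awaitTermination()
  }
}
```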

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We are looking for a skilled Data Engineer to join our team, working on end-to-end data engineering and data science use cases. The ideal candidate will have strong expertise in Python or Scala, Spark (Databricks), and SQL, building scalable and efficient data pipelines on Azure.

Responsibilities include:
- Designing, building, and maintaining scalable ETL/ELT data pipelines using Azure Data Factory, Databricks, and Spark.
- Developing and optimizing data workflows using SQL and Python or Scala for large-scale data processing and transformation.
- Implementing performance tuning and optimization strategies for data pipelines and Spark jobs to ensure efficient data handling.
- Collaborating with data engineers to support feature engineering, model deployment, and end-to-end data engineering workflows.
- Ensuring data quality and integrity by implementing validation, error-handling, and monitoring mechanisms.
- Working with structured and unstructured data using technologies such as Delta Lake and Parquet within a Big Data ecosystem.
- Contributing to MLOps practices, including integrating ML pipelines, managing model versioning, and supporting CI/CD processes.

Primary skills required: proficiency in the Azure data platform (Data Factory, Databricks); strong skills in SQL and either Python or Scala for data manipulation; experience with ETL/ELT pipelines and data transformations; familiarity with Big Data technologies (Spark, Delta Lake, Parquet); expertise in data pipeline optimization and performance tuning; experience in feature engineering and model deployment; strong troubleshooting and problem-solving skills; and experience with data quality checks and validation.

Nice-to-have skills include exposure to NLP, time-series forecasting, and anomaly detection; familiarity with data governance frameworks and compliance practices; AI/ML basics such as ML and MLOps integration; experience supporting ML pipelines with efficient data workflows; and knowledge of MLOps practices (CI/CD, model monitoring, versioning).

At Tesco, we are committed to providing the best for our colleagues. Total Rewards offered at Tesco are determined by four principles: simple, fair, competitive, and sustainable. Colleagues are entitled to 30 days of leave (18 days of earned leave, 12 days of casual/sick leave) and 10 national and festival holidays. Tesco promotes programs supporting health and wellness, including insurance for colleagues and their families, mental health support, financial coaching, and physical wellbeing facilities on campus.

Tesco in Bengaluru is a multi-disciplinary team serving customers, communities, and the planet. The goal is to create a sustainable competitive advantage for Tesco by standardizing processes, delivering cost savings, enabling agility through technological solutions, and empowering colleagues. The Tesco Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India, dedicated to roles including Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and others.
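To illustrate the Delta Lake side of the pipeline work described above, here is a small Scala sketch of an idempotent upsert into a Delta table on Databricks; the table paths and key column are hypothetical examples, not Tesco's.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object CustomerUpsert {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("customer-upsert")
      .getOrCreate()

    // Hypothetical paths; on Databricks these would typically be
    // mounted ADLS locations or catalog tables.
    val updates = spark.read.parquet("/mnt/landing/customers_daily")
    val target  = DeltaTable.forPath(spark, "/mnt/curated/customers")

    // Idempotent upsert: update existing customers, insert new ones.
    target.as("t")
      .merge(updates.as("u"), "t.customer_id = u.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()
  }
}
```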

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Chandigarh

On-site

You will be joining the Microsoft Security organization, where security is a top priority due to increasing digital threats, regulatory scrutiny, and complex estate environments. Microsoft Security aims to make the world a safer place by providing end-to-end security solutions that empower users, customers, and developers. As a Senior Data Scientist, you will be instrumental in enhancing our security posture by developing innovative models to detect and predict security threats. This role requires a deep understanding of data science, machine learning, and cybersecurity, along with the ability to analyze large datasets and collaborate with security experts to address emerging threats and vulnerabilities.

Your responsibilities will include understanding complex cybersecurity and business problems, translating them into well-defined data science problems, and building scalable solutions. You will develop and deploy production-grade AI/ML systems for real-time threat detection, analyze large datasets to identify security risks, and collaborate with security experts to incorporate domain knowledge into models. Additionally, you will lead the design and implementation of data-driven security solutions, mentor junior data scientists, and communicate findings to stakeholders.

To qualify for this role, you should have experience developing and deploying machine learning models for security applications, preferably in a Big Data or cybersecurity environment. You should be familiar with the Azure tech stack, have knowledge of anomaly detection and fraud detection, and possess expertise in programming languages such as Python, R, or Scala. A Doctorate or Master's degree in a related field, along with 5+ years of data science experience, is preferred. Strong analytical, problem-solving, and communication skills are essential, as well as proficiency in machine learning frameworks and cybersecurity principles.

Preferred qualifications include additional experience developing machine learning models for security applications, familiarity with data science workloads on the Azure tech stack, and contributions to the field of data science or cybersecurity. Your ability to drive large-scale system designs, think creatively, and translate complex data into actionable insights will be crucial in this role.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

The role of S&C GN AI - Insurance AI Generalist Consultant at Accenture Global Network involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions. As a Team Lead/Consultant at the Bengaluru BDC7C location, you will provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.

In this position, you will be part of a unified powerhouse that combines the capabilities of Strategy & Consulting with Data and Artificial Intelligence. You will architect, design, build, deploy, deliver, and monitor advanced analytics models, including Generative AI, for various client problems. Additionally, you will develop functional aspects of Generative AI pipelines and interface with clients to understand engineering and business problems.

The ideal candidate has 5+ years of experience in data-driven techniques, a Bachelor's/Master's degree in Mathematics, Statistics, Economics, Computer Science, or a related field, and a solid foundation in statistical modeling and machine learning algorithms. Proficiency in programming languages such as Python, PySpark, SQL, and Scala is required, as is experience implementing AI solutions for the insurance industry. Strong communication, collaboration, and presentation skills are essential to effectively convey complex data insights and recommendations to clients and stakeholders. Hands-on experience with Azure, AWS, or Databricks tools is a plus, and familiarity with GenAI, LLMs, RAG architecture, and LangChain frameworks is beneficial.

This role offers the opportunity to work on innovative projects, with career growth and leadership exposure within Accenture, a global community that continually pushes the boundaries of business capabilities. If you are a motivated individual with strong analytical, problem-solving, and communication skills, and the ability to thrive in a fast-paced, dynamic environment, this role provides an exciting opportunity to contribute to Accenture's future growth and be part of a vibrant global community.

Posted 1 week ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Software Engineer
Job Type: Payroll
Experience Required: 1-3 years

Job Description
We are seeking a Software Engineer to join our team, focusing on the development of automation tools tailored for GenAI, Big Data, and Electronic Chip Design applications. These tools are being built from the ground up, with modern compute infrastructure requirements considered from the start. The ideal candidate will work extensively with advanced programming concepts, algorithms, and cloud deployment on platforms such as AWS, Azure, and GCP. This is an excellent opportunity to tackle complex tasks like performance optimization, memory management, debugging, and scalable implementation, all while gaining hands-on experience with cutting-edge programming and cloud technologies.

Responsibilities
- Design, develop, and maintain software applications using C++, Java, or Scala.
- Ensure that solutions are scalable, performant, and meet quality standards.
- Debug, test, and document software applications.
- Collaborate with cross-functional teams to understand requirements and deliver high-quality solutions.
- Apply advanced algorithms and data structures to implement innovative solutions to challenging problems.

Qualifications and Skills
- B.Tech/M.Tech in Computer Science or Electronics Engineering.
- 1-3 years of hands-on programming experience in C++, Java, or Scala.
- Strong understanding of data structures, algorithms, and design patterns.
- Experience developing on the Linux platform.
- Solid understanding of performance optimization and memory management techniques.
- Knowledge of AI/ML, Big Data, and SQL is a big plus.
- Highly motivated, with a strong willingness to learn cutting-edge technologies.

Why Join Zettabolt?
- Be part of a team that builds impactful automation tools for modern applications.
- Collaborate with experienced professionals working on high-end technologies such as cloud platforms, automation frameworks, and GenAI.
- Enjoy continuous learning opportunities, with rewards for innovation and personal growth.

Benefits
- Health and life insurance coverage.
- Sponsorship to attend developer conferences and client visits.
- Reimbursement for books, courses, and certifications.
- Training sessions on algorithms, data structures, and emerging technologies such as GenAI, ML, AI, and Big Data.

The interview process will include an assessment of programming skills, focusing on algorithms and data structures, and the writing and submission of a working program for a given problem.

Skills
- Java (all versions): 1+ years, intermediate
- C++: 1+ years, intermediate
- Scala: 1+ years, intermediate
- Linux: 1+ years, intermediate
- Data structures: 1+ years, intermediate
- Algorithm development: 1+ years, intermediate
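Since the interview process assesses algorithms and data structures, a small self-contained Scala exercise of the kind implied might look like the following. This is an illustrative sketch, not an actual Zettabolt assessment question.

```scala
import scala.collection.mutable

object TopKFrequent {
  // Return the k most frequent values in `xs`, using a hash map for counting
  // and a bounded min-heap so the heap never holds more than k entries.
  def topK(xs: Seq[String], k: Int): Seq[(String, Int)] = {
    val counts = mutable.Map.empty[String, Int].withDefaultValue(0)
    xs.foreach(x => counts(x) += 1)

    // Order by negated count so the "largest" element is the least frequent.
    val heap = mutable.PriorityQueue.empty[(String, Int)](Ordering.by[(String, Int), Int](-_._2))
    counts.foreach { entry =>
      heap.enqueue(entry)
      if (heap.size > k) heap.dequeue() // evict the current least frequent
    }

    // Drain the heap (least frequent first) and reverse for descending order.
    List.fill(heap.size)(heap.dequeue()).reverse
  }

  def main(args: Array[String]): Unit = {
    val sample = Seq("a", "b", "a", "c", "b", "a")
    println(topK(sample, 2)) // expected: List((a,3), (b,2))
  }
}
```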

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a member of our team, you will be responsible for developing solutions on a cutting-edge cloud-based platform to manage and analyze large datasets. Your primary tasks will include designing, developing, and deploying digital solutions while ensuring adherence to the software development life cycle within an agile environment. Additionally, you will create technical documentation and translate business requirements into technical functionalities.

In this role, you will be expected to write clean and efficient code based on business requirements and specifications. You will also create notebooks, pipelines, and workflows in Scala or Python to ingest, process, and serve data on our platform. Furthermore, you will play a key role in the continuous improvement of Nordex development processes by actively participating in retrospectives and suggesting optimizations.

To qualify for this position, you should hold a technical degree in computer science, data analytics, electrical engineering, automation technology, or a related field. Experience or certification in Databricks, Azure Data Lake, and SQL data warehousing is highly desirable. Knowledge of security protocols and devices such as stateful firewalls, IPS, VPN, IPSec, TLS, and L2-4 security will be beneficial in fulfilling the requirements of this role. Ideally, you should have 2-3 years of experience in a relevant job profile.

If you are looking to leverage your technical expertise and contribute to the development of innovative solutions in a dynamic environment, we encourage you to apply for this exciting opportunity at Nordex.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

As a Data Scientist at KPMG in India, you will collaborate with business stakeholders and cross-functional subject matter experts to gain a deep understanding of the business context and key questions. You will be responsible for creating Proofs of Concept (POCs) and Minimum Viable Products (MVPs), and guiding them through production deployment and operationalization. Your role will involve influencing machine learning strategy for digital programs and projects, while making solution recommendations that balance speed to market and analytical soundness.

In this position, you will explore design options to assess efficiency and impact, develop approaches to enhance robustness and rigor, and build analytical and modeling solutions using tools such as Python, R, and TensorFlow. You will formulate model-based solutions by integrating machine learning algorithms with other techniques such as simulations, and will design, adapt, and visualize solutions based on evolving requirements.

Your responsibilities will also include creating algorithms to extract information from large datasets, deploying algorithms to production to identify actionable insights, and comparing results from different methodologies to recommend optimal techniques. You will work on multiple facets of AI, including cognitive engineering, conversational bots, and data science, ensuring that solutions demonstrate high levels of performance, security, scalability, and maintainability upon deployment.

As part of your role, you will lead discussions at peer reviews and use interpersonal skills to positively influence decision-making processes. You will provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts, making significant contributions to internal discussions on emerging practices, and you will facilitate the sharing of new ideas, learnings, and best practices across geographies.

To qualify for this position, you must hold a Bachelor of Science or Bachelor of Engineering degree at a minimum, along with 2-4 years of work experience as a Data Scientist. You should possess a blend of business focus, strong analytical and problem-solving skills, and programming knowledge. Proficiency in statistical concepts, ML algorithms, statistical/programming software (e.g., R, Python), and data querying languages (e.g., SQL, Hadoop/Hive, Scala), along with experience with data management tools such as Microsoft Azure or AWS, is essential. You should also have hands-on experience in feature engineering and hyperparameter optimization, and in producing high-quality code, tests, and documentation. Familiarity with Agile principles and processes, and the ability to lead, manage, and deliver business results through data scientists or professional services teams, are also crucial. Excellent communication skills, self-motivation, proactive problem-solving abilities, and the capability to work both independently and in teams are vital for this role.

While a Bachelor's or Master's degree in a technology-related field is preferred, relevant experience in Software Engineering or Data Science is mandatory. Familiarity with AI frameworks, deep learning, computer vision, and cloud services, along with proficiency in Python, SQL, Docker, and versioning tools, is highly desirable. The ideal candidate will also have experience with agent frameworks, RAG frameworks, AI algorithms, and other relevant technologies mentioned in the job description.

Posted 1 week ago

Apply

14.0 - 18.0 years

0 Lacs

Karnataka

On-site

You will be responsible for coding, unit testing, and building high-performance, scalable applications that cater to the needs of millions of Walmart International customers in the areas of supply chain management and customer experience. Your role will involve collaborating with Walmart International, which operates over 5,900 retail units in 26 countries, including markets across Africa as well as Argentina, Canada, China, India, Japan, and Mexico, among others.

Championing design-thinking principles to ensure solutions meet user needs and business objectives will be a key part of your responsibilities. You will lead the design, development, and implementation of complex, distributed enterprise applications using Java, Python, and ReactJS. Additionally, you will be involved in creating and maintaining the technical architecture, aligning it with business goals and ensuring scalability requirements are met.

As part of your role, you will architect complex software systems, design and develop robust backend and UI components, and adhere to best practices and coding standards. Your focus will be on producing high-quality software through unit testing, code reviews, and continuous integration. Furthermore, you will develop comprehensive technical documentation and presentations to communicate architectural decisions and design options effectively.

To excel in this position, you should hold a B.Tech./B.E./M.Tech./M.S. in Computer Science or a relevant discipline, with at least 14 years of experience in designing and developing highly scalable applications and platform development. Strong computer science fundamentals, hands-on experience with technologies such as Scala, Java, Spring Boot, microservices, Node.js, React JS, and Angular JS, and knowledge of web services, databases, messaging systems, and cloud platforms are essential.

Your role will also involve promoting and enforcing technical standards, driving engineering excellence, and fostering a culture of learning and innovation within the team. Strong communication and interpersonal skills, agility in adapting to change, and practice of Agile (Scrum) methodology will be valuable assets in this role.

Joining Walmart Global Tech means working in an environment where your contributions can significantly impact the lives of millions of people. You will be part of a team at the forefront of retail disruption, empowered by technology and driven by innovation. Walmart Global Tech provides opportunities for personal and professional growth, offering a hybrid work model, competitive compensation, and a range of benefits to support your well-being.

At Walmart, we are committed to creating an inclusive workplace where every associate feels valued and respected. We believe in fostering a culture of belonging, where diversity is celebrated and all individuals have equal opportunities for growth and success. As an Equal Opportunity Employer, Walmart is dedicated to understanding, respecting, and valuing the unique perspectives and experiences of all its associates, customers, and suppliers.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Kochi, Kerala

On-site

As a Data Scientist at our company, you will be responsible for delivering end-to-end projects in the analytics space. You must possess skills in Big Data and Python or R to drive business results through data-based insights. Your role will involve working with various stakeholders and functional teams, discovering solutions in large data sets, and improving business outcomes.

Identifying valuable data sources and collection processes will be a key part of your responsibilities. You will oversee the preprocessing of both structured and unstructured data and analyze large volumes of information to identify trends and patterns specific to the insurance industry. Building predictive models, machine-learning algorithms, and ensemble models will be essential tasks. You will also present information using data visualization techniques and collaborate with engineering and product development teams.

To excel in this role, you should have 3.5-5 years of experience in analytics systems/program delivery, with a minimum of 2 project implementations in Big Data or Advanced Analytics. Proficiency in statistical computer languages such as R, Python, SQL, and PySpark is necessary. Familiarity with Scala, Java, or C++ and knowledge of various machine learning techniques and advanced statistical concepts are also required. Hands-on experience with Azure/AWS analytics platforms, Databricks or similar analytical applications, business intelligence tools such as Tableau, and data frameworks such as Hadoop is crucial. Strong mathematical skills and excellent communication and presentation abilities are essential for this role.

In addition to technical skills, you should have multi-industry domain experience, expertise in Python, Scala, and SQL, and knowledge of Tableau/Power BI or similar self-service visualization tools. Your interpersonal and team skills should be exemplary, and any past leadership experience would be advantageous. Join our team at Accenture and leverage your expertise to drive impactful business outcomes through data-driven insights.

Experience: 3.5-5 years of relevant experience required.
Educational Qualification: Graduation.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As a Data Architect with over 8 years of experience, you will be responsible for designing and implementing scalable, high-performance data solutions. Your expertise in Databricks, Azure, AWS, and modern data technologies will be key in developing data lakes, data warehouses, and real-time streaming architectures.

You will work on optimizing data pipelines, Delta Lake architectures, and advanced analytics solutions using Databricks. Additionally, you will develop and manage cloud-native data platforms on Azure and AWS, design ETL/ELT pipelines, and work with big data processing tools such as Apache Spark, PySpark, Scala, Hadoop, and Kafka. Your role will also include implementing data governance and security measures, performance optimization, and collaborating with engineering, analytics, and business teams to align data strategies with business goals.

To excel in this position, you must have hands-on experience with Databricks, strong expertise in Azure and AWS data services, proficiency in SQL, Python, and Scala, experience with NoSQL databases and real-time data streaming, and knowledge of data governance best practices and CI/CD for data pipelines. Overall, the role requires a combination of technical skills, problem-solving abilities, and effective communication to drive successful data solutions within the organization.
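As an illustration of the streaming-ingestion pattern this role describes (Kafka landing into a Delta-based lakehouse), here is a minimal Scala sketch. The topic, storage account, and paths are hypothetical, and the job assumes the delta-spark and Spark-Kafka connector libraries are on the classpath.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object EventsToDeltaLake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("events-to-delta").getOrCreate()

    // Hypothetical Kafka topic; key/value arrive as binary and are cast to strings.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "clickstream")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value", "timestamp")

    // Land the stream into a bronze Delta table in micro-batches; the Delta
    // transaction log provides schema enforcement and reliable restarts.
    raw.writeStream
      .format("delta")
      .option("checkpointLocation", "abfss://lake@account.dfs.core.windows.net/_checkpoints/clickstream")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start("abfss://lake@account.dfs.core.windows.net/bronze/clickstream")
      .awaitTermination()
  }
}
```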

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior Software Engineer-I at Sumo Logic, based in Bengaluru or Noida, you will be part of the Open Source/OpenTelemetry Collector team. Your primary focus will be on designing and implementing features for a robust and efficient OpenTelemetry collection engine. This engine simplifies and enhances the monitoring of the performance and behavior of intricate distributed systems, enabling our customers to derive meaningful insights from their data effortlessly.

Your responsibilities will include writing high-quality code with a strong emphasis on unit and integration testing. You will contribute to the upstream OpenTelemetry project and analyze and enhance the efficiency, scalability, and reliability of our backend systems. Additionally, you will work collaboratively with team members to address business needs effectively and efficiently.

To excel in this role, you should ideally hold a B.Tech, M.Tech, or Ph.D. in Computer Science or a related discipline, coupled with 5-8 years of industry experience demonstrating ownership and accountability. Proficiency in GoLang or other statically typed languages such as Java, Scala, or C++ is preferred, with a willingness to learn GoLang if not already experienced. Strong communication skills, the ability to work well in a team-oriented environment, and a knack for quickly learning and adapting to new technologies are crucial for success.

It would be advantageous if you have experience contributing to open-source projects, particularly in the telemetry collection domain. Familiarity with monitoring/observability tools, GitHub Actions or other CI pipelines, multi-threaded programming, and distributed systems is highly desirable. Comfort with Unix-type operating systems such as Linux and exposure to Docker, Kubernetes, Helm, and Terraform will be beneficial. Agile software development experience, including test-driven development and iterative practices, will also be valued.

Join Sumo Logic, Inc., a company dedicated to empowering modern digital businesses by providing a reliable and secure cloud-native application platform. As a Senior Software Engineer-I, you will play a pivotal role in delivering real-time analytics and insights across observability and security solutions, ensuring the success of cloud-native applications worldwide. For more information about Sumo Logic, visit www.sumologic.com. As an employee, you will be expected to adhere to federal privacy laws, regulations, and organizational data protection policies.

Posted 1 week ago

Apply

5.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Technical Architect at Crisil, a data-centric organization, you will be responsible for designing and building scalable, secure, and high-performance solutions capable of handling large data sets. You will collaborate with cross-functional teams to identify and prioritize technical requirements and develop solutions that meet those needs.

Your key responsibilities will include:
- Designing and building solutions that can handle large data sets.
- Working closely with cross-functional teams to prioritize technical requirements.
- Reviewing and fine-tuning existing applications for optimal performance, scalability, and security.
- Collaborating with the central architecture team to establish best coding practices.
- Developing and maintaining technical roadmaps aligned with business objectives.
- Evaluating and recommending new technologies for system improvement.
- Leading large-scale projects from design to implementation and mentoring junior staff.
- Staying updated on emerging trends and technologies.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with 10-15 years of experience in software development, including at least 5 years in technical architecture or a related field. You should possess strong technical skills, including an understanding of software architecture principles, proficiency in Java programming and related technologies, knowledge of other programming languages, experience with cloud-based technologies, and familiarity with containerization using Docker and Kubernetes.

In addition to technical skills, you should demonstrate leadership qualities such as experience leading technical teams and projects, excellent communication and collaboration skills, and the ability to influence and negotiate with stakeholders. Soft skills such as problem-solving, attention to detail, adaptability, and effective written and verbal communication are also essential for this role.

Nice-to-have qualifications include a Master's degree in a related field, knowledge of data analytics and visualization tools, experience with machine learning or artificial intelligence, certification in Java development or a technical architecture framework such as TOGAF, and familiarity with agile development methodologies.

In return, Crisil offers a competitive salary and benefits package, opportunities for professional growth, a collaborative work environment, flexible working hours, access to cutting-edge technologies, and recognition for outstanding performance.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

The non-cloud QA Test Engineer position at our client, a leading technology services and consulting company, offers an exciting opportunity to work on building innovative solutions that cater to clients' complex digital transformation needs. With a holistic portfolio of consulting, design, engineering, and operations capabilities, we empower clients to achieve their boldest ambitions and develop future-ready, sustainable businesses. Our global presence, with over 230,000 employees and business partners across 65 countries, ensures that we deliver on our commitment to helping clients, colleagues, and communities thrive in a dynamic world.

As a non-cloud QA Test Engineer, you will be responsible for ensuring the quality and reliability of our solutions. The key skills required for this role include proficiency in SQL, ETL testing, and proven experience developing solutions in at least one programming language. Data testing tasks such as ETL testing, data validation, transformation checks, and SQL-based testing will be an essential part of your responsibilities. Hands-on experience in Java or Python is crucial for this role.

While the skills above are must-haves, it would be advantageous if you have backend automation experience, along with scripting knowledge in Python, Java, Scala, or PySpark. Familiarity with Databricks and Azure will be considered a plus.

This position is based in Pune and offers a hybrid work mode under a contract employment type. The ideal candidate should have a minimum of 6 years of experience in the field. The notice period for this role ranges from immediate to 15 days. If you are interested and meet the requirements outlined above, please share your resume with barkavi@people-prime.com. Join us in our mission to drive digital transformation and innovation for our clients while creating a sustainable future for businesses worldwide.
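To make the ETL-testing responsibilities concrete, below is a minimal Scala sketch of the kind of SQL-style reconciliation checks a data test might automate; the table locations and column names are hypothetical examples, not the client's schema.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EtlReconciliationCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("etl-reconciliation").getOrCreate()

    // Hypothetical source extract and warehouse target locations.
    val source = spark.read.parquet("/data/staging/orders")
    val target = spark.read.parquet("/data/warehouse/orders")

    // Check 1: row counts must match after the load.
    val sourceCount = source.count()
    val targetCount = target.count()

    // Check 2: no duplicate business keys in the target.
    val duplicateKeys = target.groupBy("order_id").count().filter(col("count") > 1).count()

    // Check 3: mandatory columns must not contain nulls.
    val nullAmounts = target.filter(col("order_amount").isNull).count()

    val failures = Seq(
      (sourceCount == targetCount) -> s"row count mismatch: $sourceCount vs $targetCount",
      (duplicateKeys == 0L)        -> s"$duplicateKeys duplicate order_id values",
      (nullAmounts == 0L)          -> s"$nullAmounts rows with null order_amount"
    ).collect { case (passed, msg) if !passed => msg }

    if (failures.nonEmpty) sys.error("ETL validation failed: " + failures.mkString("; "))
    else println("ETL validation passed")
  }
}
```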

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a highly skilled and motivated Python, AWS, and Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. Your responsibilities will include designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.

Virtusa is a company that values teamwork, quality of life, and professional and personal development. We are proud to have a team of 27,000 people globally who care about your growth and seek to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. At Virtusa, we believe in the potential of great minds coming together. We emphasize collaboration and a team environment, providing a dynamic place for talented individuals to nurture new ideas and strive for excellence.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ML Engineer specializing in Predictive Maintenance on the Digital Service Platform (DSP) at Vanderlande in Veghel, you will contribute to the development of data-driven services and solutions. Vanderlande, a leading provider of baggage handling systems for airports and parcel handling systems, deals with massive amounts of data on a daily basis. Your role will involve collaborating with a diverse team to optimize customer asset usage and maintenance, thereby impacting performance, cost, and sustainability KPIs by extending component lifetimes.

In this position, you will work closely with data scientists, machine learning engineers, data engineers, and other professionals to design and implement data and machine learning pipelines for Predictive Maintenance solutions. Your responsibilities will include developing, testing, and documenting data pipelines, building scalable pipelines for machine learning models, integrating machine learning models into production pipelines, and continuously improving data and ML services for customer sites.

To qualify for this role, you should have a minimum of 4 years' experience building complex data pipelines and integrating machine learning solutions, along with a Bachelor's or Master's degree in Computer Science, IT, Data Science, or a related field. Proficiency in Java, Scala, and Python, as well as experience with stream processing frameworks such as Spark and cloud platforms such as Azure and Databricks, is required. Additionally, familiarity with MLOps practices, ML frameworks such as TensorFlow and PyTorch, and data quality management is essential.

Working at Vanderlande will give you the opportunity to be at the forefront of data-driven innovation in a global organization. You will collaborate with a talented and diverse team to design and implement cutting-edge solutions, expanding your expertise in data engineering and machine learning in an industrial setting. If you are passionate about leveraging data and machine learning to drive innovation, we look forward to receiving your application.
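For a concrete picture of the ML pipeline work described above, here is a minimal Spark ML sketch in Scala for training a failure-prediction model; the feature table, column names, and model path are hypothetical placeholders, not Vanderlande's actual data.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object FailurePredictionTraining {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("failure-prediction").getOrCreate()

    // Hypothetical feature table with per-component sensor aggregates and a
    // numeric binary label indicating failure within a maintenance horizon.
    val features = spark.read.parquet("/mnt/features/component_health")

    val assembler = new VectorAssembler()
      .setInputCols(Array("vibration_mean", "temperature_max", "runtime_hours"))
      .setOutputCol("features")

    val classifier = new RandomForestClassifier()
      .setLabelCol("failed_within_horizon")
      .setFeaturesCol("features")
      .setNumTrees(100)

    // Bundle feature assembly and the estimator into one pipeline so the
    // same transformations are applied identically at serving time.
    val pipeline = new Pipeline().setStages(Array(assembler, classifier))
    val model = pipeline.fit(features)

    // Persist the fitted pipeline; an MLOps layer would typically version
    // and promote this artifact before production integration.
    model.write.overwrite().save("/mnt/models/failure_prediction")
  }
}
```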

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Scala, PySpark
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to guarantee the quality of the applications you create, while continuously seeking ways to enhance functionality and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation in and contribution to team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Conduct thorough testing and debugging of applications to ensure optimal performance and reliability.

Professional & Technical Skills:
- Must-have skills: proficiency in the Databricks Unified Data Analytics Platform.
- Good-to-have skills: experience with PySpark and Scala.
- Strong understanding of data integration and ETL processes.
- Familiarity with cloud computing concepts and services.
- Experience in application lifecycle management and agile methodologies.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, Microsoft Azure Data Services
Good-to-have skills: NA
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with cross-functional teams to gather requirements, developing application features, and ensuring that the applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality solutions that meet the needs of the organization and its stakeholders.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation in and contribution to team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and best practices.

Professional & Technical Skills:
- Must-have skills: proficiency in the Databricks Unified Data Analytics Platform, Microsoft Azure Databricks, and Microsoft Azure Data Services.
- Strong understanding of data integration techniques and ETL processes.
- Experience with cloud-based data storage solutions and data management.
- Familiarity with programming languages such as Python or Scala.
- Ability to work with data visualization tools to present insights effectively.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Greater Chennai Area

On-site

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture, a culture driven by our value of putting our people first. Ever since, the happiness, development, and contribution of every Workmate has been central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.

About The Team
Workday Prism Analytics is a self-service analytics solution for Finance and Human Resources teams that allows companies to bring external data into Workday, combine it with existing people or financial data, and present it via Workday's reporting framework. This gives the end user a comprehensive collection of insights that can be acted on in a flash. We design, build and maintain the data warehousing systems that underpin our Analytics products. We straddle both applications and systems; the ideal candidate for this role is someone who has a passion for solving hyper-scale engineering challenges to serve the largest companies on the planet.

About The Role
As part of Workday's Prism Analytics team, you will be responsible for the integration of our Big Data Analytics stack with Workday's cloud infrastructure. You will work on building, improving and extending large-scale distributed data processing frameworks like Spark, Hadoop, and YARN in a multi-tenanted cloud environment. You will also be responsible for developing techniques for cluster management, high availability and disaster recovery of the Analytics Platform, Hadoop and Spark services, and you will engineer smart tools and frameworks that provide easy monitoring, troubleshooting, and manageability of our cloud-based analytic services.

About You
You are an engineer who is passionate about developing distributed applications in a multi-tenanted cloud environment. You take pride in developing distributed systems techniques to coordinate application services, ensuring the application remains highly available, and working on disaster recovery for the applications in the cloud. You think not only about what is valuable for the development of the right abstractions and modules but also about programmatic interfaces to enable customer success. You also excel at balancing priorities and making the right tradeoffs in feature content and timely delivery of features while ensuring customer success and technology leadership for the company. You can make all of this happen using Java, Spark, and related Hadoop technologies.

Basic Qualifications
- At least 4+ years of software development experience (using Java, Scala or other languages) with deep Linux/Unix expertise.

Other Qualifications
- Experience building highly available, scalable, reliable multi-tenanted big data applications on cloud (AWS, GCP) and/or data center architectures.
- Working knowledge of distributed system principles.
- Understanding of big data frameworks like Spark and/or Hadoop.
- Understanding of resource management using YARN, Kubernetes, etc.

Pursuant to applicable Fair Chance law, Workday will consider for employment qualified applicants with arrest and conviction records. Workday is an Equal Opportunity Employer, including individuals with disabilities and protected veterans. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
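As a purely illustrative sketch of tuning Spark for a shared, multi-tenant YARN environment of the kind this role manages, the following shows how a session might be configured in Scala; every value and the queue-per-tenant convention are assumptions for illustration, not Workday's actual settings.

```scala
import org.apache.spark.sql.SparkSession

object AnalyticsJobSession {
  // A minimal sketch of building a Spark session tuned for a shared YARN
  // cluster; all numbers are placeholders to show where tuning happens.
  def build(tenant: String): SparkSession =
    SparkSession.builder()
      .appName(s"prism-analytics-$tenant")
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "2")
      .config("spark.dynamicAllocation.maxExecutors", "50")
      .config("spark.executor.memory", "8g")
      .config("spark.yarn.queue", tenant)           // isolate tenants via YARN queues
      .config("spark.sql.shuffle.partitions", "400")
      .getOrCreate()

  def main(args: Array[String]): Unit = {
    val spark = build("tenant-a")
    println(spark.conf.get("spark.yarn.queue"))
    spark.stop()
  }
}
```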

Posted 1 week ago

Apply

7.0 years

0 Lacs

Greater Chennai Area

On-site

Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture, a culture driven by our value of putting our people first. Ever since, the happiness, development, and contribution of every Workmate has been central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.

About The Team
Workday Prism Analytics is a self-service analytics solution for Finance and Human Resources teams that allows companies to bring external data into Workday, combine it with existing people or financial data, and present it via Workday's reporting framework. This gives the end user a comprehensive collection of insights that can be acted on in a flash. We design, build and maintain the data warehousing systems that underpin our Analytics products. We straddle both applications and systems; the ideal candidate for this role is someone who has a passion for solving hyper-scale engineering challenges to serve the largest companies on the planet.

About The Role
As part of Workday's Prism Analytics team, you will be responsible for the integration of our Big Data Analytics stack with Workday's cloud infrastructure. You will work on building, improving and extending large-scale distributed data processing frameworks like Spark, Hadoop, and YARN in a multi-tenanted cloud environment. You will also be responsible for developing techniques for cluster management, high availability and disaster recovery of the Analytics Platform, Hadoop and Spark services, and you will engineer smart tools and frameworks that provide easy monitoring, troubleshooting, and manageability of our cloud-based analytic services.

About You
You are an engineer who is passionate about developing distributed applications in a multi-tenanted cloud environment. You take pride in developing distributed systems techniques to coordinate application services, ensuring the application remains highly available, and working on disaster recovery for the applications in the cloud. You think not only about what is valuable for the development of the right abstractions and modules but also about programmatic interfaces to enable customer success. You also excel at balancing priorities and making the right tradeoffs in feature content and timely delivery of features while ensuring customer success and technology leadership for the company. You can make all of this happen using Java, Spark, and related Hadoop technologies.

Basic Qualifications
- 7+ years of software engineering experience.
- At least 5+ years of software development experience (using Java, Scala or other languages) with deep Linux/Unix expertise.

Other Qualifications
- Experience building highly available, scalable, reliable multi-tenanted big data applications on cloud (AWS, GCP) and/or data center architectures.
- Working knowledge of distributed system principles.
- Experience managing big data frameworks like Spark and/or Hadoop.
- Understanding of resource management using YARN, Kubernetes, etc.

Pursuant to applicable Fair Chance law, Workday will consider for employment qualified applicants with arrest and conviction records. Workday is an Equal Opportunity Employer, including individuals with disabilities and protected veterans. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Duties for this role include but not limited to: supporting the design, build, test and maintain data pipelines at big data scale. Assists with updating data from multiple data sources. Work on batch processing of collected data and match its format to the stored data, make sure that the data is ready to be processed and analyzed. Assisting with keeping the ecosystem and the pipeline optimized and efficient, troubleshooting standard performance, data related problems and provide L3 support. Implementing parsers, validators, transformers and correlators to reformat, update and enhance the data. Data Engineers play a pivotal role within Dataworks, focused on creating and driving engineering innovation and facilitating the delivery of key business initiatives. Acting as a “universal translator” between IT, business, software engineers and data scientists, data engineers collaborate across multi-disciplinary teams to deliver value. Data Engineers will work on those aspects of the Dataworks platform that govern the ingestion, transformation, and pipelining of data assets, both to end users within FedEx and into data products and services that may be externally facing. Day-to-day, they will be deeply involved in code reviews and large-scale deployments. Essential Job Duties & Responsibilities Understanding in depth both the business and technical problems Dataworks aims to solve Building tools, platforms and pipelines to enable teams to clearly and cleanly analyze data, build models and drive decisions Scaling up from “laptop-scale” to “cluster scale” problems, in terms of both infrastructure and problem structure and technique Collaborating across teams to drive the generation of data driven operational insights that translate to high value optimized solutions. Delivering tangible value very rapidly, collaborating with diverse teams of varying backgrounds and disciplines Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases Interacting with senior technologists from the broader enterprise and outside of FedEx (partner ecosystems and customers) to create synergies and ensure smooth deployments to downstream operational systems Skill/Knowledge Considered a Plus Technical background in computer science, software engineering, database systems, distributed systems Fluency with distributed and cloud environments and a deep understanding of optimizing computational considerations with theoretical properties Experience in building robust cloud-based data engineering and curation solutions to create data products useful for numerous applications Detailed knowledge of the Microsoft Azure tooling for large-scale data engineering efforts and deployments is highly preferred. Experience with any combination of the following azure tools: Azure Databricks, Azure Data Factory, Azure SQL D, Azure Synapse Analytics Developing and operationalizing capabilities and solutions including under near real-time high-volume streaming conditions. Hands-on development skills with the ability to work at the code level and help debug hard to resolve issues. 
A compelling track record of designing and deploying large scale technical solutions, which deliver tangible, ongoing value Direct experience having built and deployed robust, complex production systems that implement modern, data processing methods at scale Ability to context-switch, to provide support to dispersed teams which may need an “expert hacker” to unblock an especially challenging technical obstacle, and to work through problems as they are still being defined Demonstrated ability to deliver technical projects with a team, often working under tight time constraints to deliver value An ‘engineering’ mindset, willing to make rapid, pragmatic decisions to improve performance, accelerate progress or magnify impact Comfort with working with distributed teams on code-based deliverables, using version control systems and code reviews Ability to conduct data analysis, investigation, and lineage studies to document and enhance data quality and access Use of agile and devops practices for project and software management including continuous integration and continuous delivery Demonstrated expertise working with some of the following common languages and tools: Spark (Scala and PySpark), Kafka and other high-volume data tools SQL and NoSQL storage tools, such as MySQL, Postgres, MongoDB/CosmosDB Java, Python data tools Azure DevOps experience to track work, develop using git-integrated version control patterns, and build and utilize CI/CD pipelines Working knowledge and experience implementing data architecture patterns to support varying business needs Experience with different data types (json, xml, parquet, avro, unstructured) for both batch and streaming ingestions Use of Azure Kubernetes Services, Eventhubs, or other related technologies to implement streaming ingestions Experience developing and implementing alerting and monitoring frameworks Working knowledge of Infrastructure as Code (IaC) through Terraform to create and deploy resources Implementation experience across different data stores, messaging systems, and data processing engines Data integration through APIs and/or REST service PowerPlatform (PowerBI, PowerApp, PowerAutomate) development experience a plus Additional Job Description Analytical Skills, Accuracy & Attention to Detail, Planning & Organizing Skills, Influencing & Persuasion Skills, Presentation Skills FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances. Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. 
Our Philosophy
The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle: we return these profits back into the business and invest in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being and value their contributions to the company.

Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today's global marketplace.
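To make the Spark-on-Scala expectations above concrete, the following is a minimal sketch of a batch cleansing job of the kind described (parse, validate, reformat, write). The Azure Data Lake paths, column names, and validation rules are hypothetical illustrations, not FedEx's actual pipeline code.

```scala
// Minimal Spark (Scala) batch transformation sketch: read raw events, validate and
// reformat them, and write a cleaned Parquet output. All paths and columns are hypothetical.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object ShipmentEventCleaner {
  def validateAndReformat(raw: DataFrame): DataFrame =
    raw
      .filter(col("event_id").isNotNull && col("event_ts").isNotNull) // basic validation
      .withColumn("event_ts", to_timestamp(col("event_ts")))          // normalize timestamps
      .withColumn("status", upper(trim(col("status"))))               // standardize status codes
      .dropDuplicates("event_id")                                     // drop replayed events

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("shipment-event-cleaner").getOrCreate()
    val raw = spark.read.json("abfss://landing@example.dfs.core.windows.net/shipment_events/")
    validateAndReformat(raw)
      .write.mode("overwrite")
      .partitionBy("status")
      .parquet("abfss://curated@example.dfs.core.windows.net/shipment_events/")
    spark.stop()
  }
}
```

The same validateAndReformat function could also be applied to a Kafka-fed Structured Streaming DataFrame, which is one way the near-real-time, high-volume streaming conditions mentioned above are typically handled.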

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Mysore, Karnataka, India

On-site

Training-related experience

Must have
- Teaching experience: conducting training sessions in the classroom and dynamically responding to the different capabilities of learners; experience in analyzing feedback from sessions and identifying action areas for self-improvement
- Developing teaching material: experience in developing teaching material, including exercises and assignments
- Good presentation skills; excellent oral/written communication skills

Nice to have
- Teaching experience: experience in delivering sessions over virtual classrooms
- Instructional design: developing engaging content
- Designing assessments: experience in designing assessments to evaluate the effectiveness of training and gauge the proficiency of the learner
- Participation in activities of the software development lifecycle such as development, testing, and configuration management

Job Responsibilities
- Develop teaching materials, including exercises and assignments (an illustrative sample exercise follows this listing)
- Conduct classroom training / virtual training
- Design assessments
- Enhance course material and course delivery based on feedback to improve training effectiveness

Location: Mysore, Mangalore, Bangalore, Chennai, Pune, Hyderabad, Chandigarh

Description of the Profile
We are looking for trainers with 2 to 4 years of teaching experience and technology know-how in one or more of the following areas:
- Java – Java programming, Spring, Angular / React, Bootstrap
- Microsoft – C# programming, SQL Server, ADO.NET, ASP.NET, MVC design pattern, Azure, MS Power Platform, MS Dynamics 365 CRM, MS Dynamics 365 ERP, SharePoint
- Testing – Selenium, Micro Focus UFT, Micro Focus ALM tools, SOA testing, SoapUI, REST Assured, Appium
- Big Data – Python programming, Hadoop, Spark, Scala, MongoDB, NoSQL
- SAP – SAP ABAP programming / SAP MM / SAP SD / SAP BI / SAP S/4HANA
- Oracle – Oracle E-Business Suite (EBS) / PeopleSoft / Siebel CRM / Oracle Cloud / OBIEE / Fusion Middleware
- API and integration – API, Microservices, TIBCO, Apigee, Mule
- Digital Commerce – Salesforce, Adobe Experience Manager
- Digital Process Automation – PEGA, Appian, Camunda, Unqork, UiPath
- MEAN / MERN stacks
- Business Intelligence – SQL Server, ETL using SQL Server, analysis using SQL Server, enterprise reporting using SQL, visualization
- Data Science – Python for data science, machine learning, exploratory data analysis, statistics & probability
- Cloud & Infrastructure Management – network administration / database administration / Windows administration / Linux administration / middleware administration / end-user computing / ServiceNow; cloud platforms like AWS / GCP / Azure / Oracle Cloud; virtualization
- Cybersecurity – infra security / identity & access management / application security / governance & risk compliance / network security
- Mainframe – COBOL, DB2, CICS, JCL
- Open source – Python, PHP, Unix / Linux, MySQL, Apache, HTML5, CSS3, JavaScript
- DBMS – Oracle / SQL Server / MySQL / DB2 / NoSQL
- Design patterns, Agile, DevOps
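As a purely illustrative sample of the kind of exercise-and-solution material the role involves authoring (not taken from any actual course), a short Scala practice problem on the collections API might look like this:

```scala
// Sample practice exercise a trainer might author (illustrative only).
// Task: given a list of orders, compute total revenue per product using the collections API.
case class Order(product: String, quantity: Int, unitPrice: Double)

object RevenueExercise {
  // Expected solution: group orders by product and sum quantity * unitPrice per group.
  def revenueByProduct(orders: List[Order]): Map[String, Double] =
    orders
      .groupBy(_.product)
      .map { case (product, items) =>
        product -> items.map(o => o.quantity * o.unitPrice).sum
      }

  def main(args: Array[String]): Unit = {
    val orders = List(Order("keyboard", 2, 25.0), Order("mouse", 3, 10.0), Order("keyboard", 1, 25.0))
    assert(revenueByProduct(orders) == Map("keyboard" -> 75.0, "mouse" -> 30.0))
    println("All checks passed")
  }
}
```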

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
As a Data Engineer, you will leverage your technical expertise in data, analytics, cloud technologies, and analytic software tools to identify the best designs, improve business processes, and generate measurable business outcomes. You will work with Data Engineering teams from within D&A, across the Pro Tech portfolio, and additional Ford organizations such as GDI&A (Global Data Insight & Analytics), Enterprise Connectivity, Ford Customer Service Division, Ford Credit, etc.

- Develop EL/ELT/ETL pipelines to make data available in the BigQuery analytical data store from disparate batch and streaming data sources for the Business Intelligence and Analytics teams.
- Work with on-prem data sources (Hadoop, SQL Server), understand the data model and the business rules behind the data, and build data pipelines (with GCP, Informatica) for one or more Ford Pro verticals. This data will be landed in GCP BigQuery (a minimal sketch of such a load follows this listing).
- Build cloud-native services and APIs to support and expose data-driven solutions.
- Partner closely with our data scientists to ensure the right data is made available in a timely manner to deliver compelling and insightful solutions.
- Design, build, and launch shared data services to be leveraged by the internal and external partner developer community.
- Build out scalable data pipelines and choose the right tools for the right job.
- Manage, optimize, and monitor data pipelines.
- Provide extensive technical and strategic advice and guidance to key stakeholders around data transformation efforts; understand how data is useful to the enterprise.

Responsibilities
- Bachelor's degree
- 3+ years of experience with SQL and Python
- 2+ years of experience with GCP or AWS cloud services; strong candidates with 5+ years in a traditional data warehouse environment (ETL pipelines with Informatica) will be considered
- 3+ years of experience building out data pipelines from scratch in a highly distributed and fault-tolerant manner
- Comfortable with a broad array of relational and non-relational databases
- Proven track record of building applications in a data-focused role (cloud and traditional data warehouse)

Qualifications
- Experience with GCP cloud services including BigQuery, Cloud Composer, Dataflow, CloudSQL, GCS, Cloud Functions, and Pub/Sub
- Inquisitive, proactive, and interested in learning new tools and techniques
- Familiarity with big data and machine learning tools and platforms
- Comfortable with open-source technologies including Apache Spark, Hadoop, and Kafka
- 1+ year of experience with Hive, Spark, Scala, JavaScript
- Strong oral, written, and interpersonal communication skills
- Comfortable working in a dynamic environment where problems are not always well-defined
- M.S. in a science-based program and/or quantitative discipline with a technical emphasis
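The following is a minimal, hypothetical sketch of the batch leg of such a pipeline: reading a Hive table from the on-prem cluster and landing it in BigQuery via the open-source spark-bigquery connector. The table names, target project, and staging bucket are placeholders, not actual Ford configuration, and the connector choice is an assumption for illustration.

```scala
// Sketch: land an on-prem Hive table into BigQuery with the spark-bigquery connector.
// Requires the spark-bigquery connector jar on the classpath; all names below are placeholders.
import org.apache.spark.sql.SparkSession

object HiveToBigQuery {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-to-bigquery")
      .enableHiveSupport()
      .getOrCreate()

    val vehicles = spark.table("pro_vertical.vehicle_telemetry") // hypothetical Hive source table

    vehicles.write
      .format("bigquery")
      .option("table", "example-project.curated.vehicle_telemetry") // hypothetical BigQuery target
      .option("temporaryGcsBucket", "example-staging-bucket")       // GCS staging for the load job
      .mode("overwrite")
      .save()

    spark.stop()
  }
}
```

In practice a job like this would typically be scheduled and monitored with Cloud Composer, one of the GCP services listed in the qualifications.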

Posted 1 week ago

Apply

8.0 - 9.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The job opportunity below is with one of our clients, a leading digital solution company for business IT solutions.

Position: Sr. Software Engineer - AWS Data
Location: Bangalore
Duration: Full Time
Work Type: Onsite

Job Description:

Primary Responsibilities
- Responsible for designing, developing, testing, and supporting data pipelines and applications
- Industrialize data feeds
- Experience in working with cloud environments (AWS)
- Creates data pipelines into existing systems
- Experience with enforcing security controls and best practices to protect sensitive data within AWS data pipelines, including encryption, access controls, and auditing mechanisms (a minimal encryption sketch follows this listing)
- Improves data cleansing and facilitates connectivity of data and applied technologies between both external and internal data sources
- Establishes a continuous quality improvement process to systematically optimize data quality
- Translates data requirements from data users into ingestion activities
- B.Tech / B.Sc. / M.Sc. in Computer Science or a related field and 3+ years of relevant industry experience
- Interest in solving challenging technical problems
- Nice to have: test-driven development and CI/CD workflows
- Knowledge of version control software such as Git and experience in working with major hosting services (e.g., Azure DevOps, GitHub, Bitbucket, GitLab)
- Nice to have: experience in working with cloud environments such as AWS, especially creating serverless architectures and using infrastructure-as-code facilities such as CloudFormation/CDK, Terraform, ARM
- Hands-on experience in working with various frontend and backend languages (e.g., Python, R, Java, Scala, C/C++, Rust, TypeScript, ...)

Technical Skills Required
- 8+ years of experience in C# and .NET Core / .NET 6+, GraphQL
- Deep expertise in Entity Framework / EF Core (Code First / Database First approaches, migrations, LINQ, and query optimization)
- Strong knowledge of SQL Server (stored procedures, indexing, query tuning)
- Experience in building and consuming RESTful APIs
- Familiarity with Azure PaaS services (App Services, Azure SQL, Key Vault, Azure Functions) is a plus
- Knowledge of design patterns, SOLID principles, and clean code practices
- Experience with unit testing frameworks like MSTest, xUnit, or NUnit
- Proficient in Git / Azure DevOps for source control and release management

TekWissen Group is an equal opportunity employer supporting workforce diversity.
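The following is a minimal sketch of one of the security controls named above: writing pipeline output to S3 with SSE-KMS server-side encryption enforced at write time, using the AWS SDK for Java v2 from Scala. The bucket, object key, and KMS key ARN are placeholders, not values from this role.

```scala
// Sketch: put a pipeline output object to S3 with SSE-KMS encryption enforced on the request.
// All identifiers below are placeholders for illustration.
import software.amazon.awssdk.core.sync.RequestBody
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.{PutObjectRequest, ServerSideEncryption}

object EncryptedS3Writer {
  def main(args: Array[String]): Unit = {
    val s3 = S3Client.create()

    val request = PutObjectRequest.builder()
      .bucket("example-curated-data")                                // placeholder bucket
      .key("daily/2024-01-01/output.json")                           // placeholder object key
      .serverSideEncryption(ServerSideEncryption.AWS_KMS)            // encrypt at rest with KMS
      .ssekmsKeyId("arn:aws:kms:eu-west-1:123456789012:key/example") // placeholder key ARN
      .build()

    s3.putObject(request, RequestBody.fromString("""{"status":"ok"}"""))
    s3.close()
  }
}
```

On the access-control side, this is typically backed by a bucket policy that denies unencrypted PutObject requests and by least-privilege IAM roles for the pipeline.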

Posted 1 week ago

Apply