Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6.0 - 7.0 years
8 - 9 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
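For illustration only (not part of the posting): a minimal PySpark sketch of the kind of pipeline this role describes, reading raw data from Azure Blob, cleansing and aggregating it, and writing a curated Delta output. The storage account, container paths, and column names are hypothetical placeholders, and storage credentials are assumed to be configured in the cluster.

```python
# Minimal PySpark batch pipeline sketch: extract from a raw zone, transform, load to a curated zone.
# Paths, columns, and the storage account are placeholders; this is not the employer's actual pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()  # on Databricks, `spark` already exists

raw_path = "wasbs://raw@examplestorageacct.blob.core.windows.net/orders/"          # assumed Blob source
curated_path = "wasbs://curated@examplestorageacct.blob.core.windows.net/orders_daily/"

orders = spark.read.format("json").load(raw_path)

daily = (
    orders
    .filter(F.col("status") == "COMPLETED")                      # cleanse: keep finished orders only
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

daily.write.format("delta").mode("overwrite").partitionBy("order_date").save(curated_path)
```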
Posted 20 hours ago
6.0 - 7.0 years
8 - 9 Lacs
Pune
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences.
Posted 20 hours ago
8.0 - 13.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Req ID: 317928 We are currently seeking a Java Backend Developer - Digital Engineering Lead Engineer to join our team in Hyderabad, Telangana (IN-TG), India (IN). Results-Based Lead Software Engineer (Product Development) Other comparable titles: SaaS Backend Developer Overview As a Backend Developer/Architect, you need to participate in estimating, technical design, implementation, documentation, testing, deployment and support of applications developed for our clients. As a member working in a team environment, you will work with solution architects to interpret/translate written business requirements into technical design/code. Scope Core responsibilities include building backend REST API services based on Spring Boot, deployed in a SaaS environment. The dev team currently comprises 10+ global associates across the US and India (COE). Our current technical environment Software: Spring Boot microservices, Portal component, Azure SQL, Spock/Groovy Application Architecture: Services deployed on Azure Frameworks/Others: Kafka, GitHub, CI/CD, Java, J2EE, Docker, Kubernetes, experience with SaaS What you'll do Development of REST APIs in a microservices architecture (Spring Boot), deployed on Microsoft's Azure platform. The architecture includes technology components such as ReactJS and JavaScript/TypeScript (UI), Spring Boot (backend), Azure SQL, Azure Blob, Azure Logic Apps, Portal and supply chain planning software Be a senior member of a highly skilled team seeking systematic approaches to improve engineering productivity, efficiency, effectiveness, and quality Support our existing customer base with new enhancements and defect fixes Create technical documentation Provide early visibility and mitigation of technical challenges through the journey. Confidently represent the product and portfolio What we are looking for Bachelor's degree (STEM preferred) and a minimum of 8+ years of experience in software development; ideally a candidate who has started as a Software Engineer and progressed to Lead Software Engineer Strong experience in programming and problem solving Hands-on development skills along with design experience; should not have moved away from software development Experience in building products with an API-first approach in a SaaS environment Required Skills: Java, Spring Boot, SQL Preferred Skills: Knowledge of public clouds (Azure, AWS, etc.), Spring Cloud, Docker, Kubernetes Experience in the Supply Chain domain is a plus Good understanding of secure architectures, secure configuration, identity management, role-based access control, authentication & authorization, and data encryption.
Posted 6 days ago
6.0 - 7.0 years
5 - 9 Lacs
Navi Mumbai
Work from Office
Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 1 week ago
6.0 - 7.0 years
14 - 18 Lacs
Kochi
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 1 week ago
3.0 - 6.0 years
12 - 20 Lacs
Pune
Hybrid
Data Engineer required for our client, a product-based IT company. Job location: Pune. Salary: up to 20 LPA. Experience: 3 to 6 years. Work mode: hybrid. Immediate joiners only (within 15 days). Required candidate profile: excellent written and verbal communication skills are a must. Apply here.
Posted 2 weeks ago
4 - 6 years
4 - 6 Lacs
Bengaluru
Work from Office
Experienced developer proficient in Python, API development (Flask), GraphQL, Apache Airflow, Databricks, Docker, Azure Blob, and databases (MongoDB/PostgreSQL). Skilled in building scalable, efficient solutions for diverse technical challenges.
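For illustration only (not part of the posting): a minimal Flask API sketch matching the stack listed above. The /jobs resource and its in-memory data are hypothetical stand-ins for a MongoDB/PostgreSQL-backed service.

```python
# Minimal Flask REST API sketch; the data here is an in-memory placeholder for a real database.
from flask import Flask, jsonify, request

app = Flask(__name__)

JOBS = [
    {"id": 1, "title": "Data Engineer", "location": "Bengaluru"},
    {"id": 2, "title": "Backend Developer", "location": "Pune"},
]

@app.get("/jobs")
def list_jobs():
    # Optional ?location= filter, e.g. GET /jobs?location=Pune
    location = request.args.get("location")
    result = [job for job in JOBS if location is None or job["location"] == location]
    return jsonify(result)

if __name__ == "__main__":
    app.run(debug=True)
```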
Posted 2 months ago
2 - 5 years
5 - 9 Lacs
Bengaluru
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 2 months ago
5 - 8 years
4 - 8 Lacs
Bengaluru
Work from Office
Description: Architecting, implementation, and hands-on experience in MS Dynamics D365 Commerce. Core development capabilities in X++, Visual Studio, SSRS reports, Business - Operational framework and data entities. Minimum 5 years of experience in creating new objects and customizing standard objects using CoC and event handlers to extend the standard AOT objects in a D365 environment. Experience in integration with 3rd-party systems and workflows in Dynamics 365 Commerce. Role-based security design and customization. Good understanding of LCS (Lifecycle Services) and Azure DevOps. Experience in leveraging the Microsoft Sure Step methodology for full-lifecycle implementations. Good knowledge of OOP concepts and SQL Server DB. Knowledge of PowerApps, Logic Apps, OData Service, Azure Blob and other Azure services will be an added advantage. Responsible for technical support in D365 Commerce as well as integrated third-party application interfaces. Provide in-depth support for users remotely, via written correspondence and using the ServiceNow support platform, to resolve complex or escalated Dynamics 365 Commerce problems. Required to prepare technical specification documents for customizations and bug fixes. Ensure technical support documents are complete and fixes are delivered successfully as per the plan. Involvement in the release/upgrade/implementation cycle as needed. Ongoing monitoring and investigation of D365 Commerce. Responsible for designing and building new custom functionality in MS Dynamics 365 Commerce using the X++ programming language. Able to perform D365 application performance tuning using D365 and SQL tools. Ability to work collaboratively as part of an existing delivery team. Conduct analysis to triage assigned issues and escalate as required. Provide research, advice and opinion regarding ERP issues and associated risks. Able to design/develop extensions to the D365 Retail/eCommerce suite. Knowledge of Power Platform, Power BI, and related Microsoft technologies is a plus. Diagnose and resolve Dynamics problems and provide information to educate users on resolutions in a prompt and professional manner. Aim to align response times with team SLAs, delivering a consistently high quality of service. Engage with appropriate internal and external resources to resolve issues and keep users updated on progress toward resolution. Monitor, follow up on and maintain the D365 Commerce support queue. Ensure that support issues are assigned and resolutions are documented. Regularly review the support log to track performance, trends, anomalies, and opportunities for improvement. Relevant Microsoft Dynamics certifications: F&O Developer (MB-500), Dynamics 365 Finance and Operations Apps Developer. Additional Details: Global Grade C. Named Job Posting? No. Remote work possibility: No. Local Skills: Microsoft Dynamics 365 Commerce. Languages Required: English.
Posted 2 months ago
3 - 7 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Microsoft Azure Data Services Good to have skills: NA Minimum 3 year(s) of experience is required Educational Qualification: As per Accenture Standards Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with Microsoft Azure Data Services and collaborating with cross-functional teams to deliver impactful solutions. Roles & Responsibilities: Lead the design, development, and deployment of applications using Microsoft Azure Data Services. Act as the primary point of contact for the project, collaborating with cross-functional teams to ensure timely delivery of solutions. Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards. Conduct detailed analysis of business requirements, translating them into technical specifications and design documents. Ensure the quality and integrity of the application through rigorous testing and debugging. Professional & Technical Skills: Must Have Skills: Azure Data Factory (Data Pipeline and Framework implementation), SQL Server (Strong SQL Development) including SQL Stored Procedures, ETL/ELT and DWH concepts, Azure DevOps, Azure Blob, Gen1/Gen2 Additional Information: The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful solutions. This position is based at our Bengaluru office. Qualification: As per Accenture Standards
Posted 2 months ago
4 - 8 years
6 - 10 Lacs
Bengaluru
Work from Office
Overall, 4 to 8 years of experience in the IT industry. Min 4 years of experience working on Data Engineering using Azure Databricks, Synapse, ADF/Airflow. At least 3 project experiences in building and maintaining ETL/ELT pipelines for large data sets, complex data processing, transformations, business logic, cost monitoring & performance optimization, and feature engineering processes. Must Have skills: Extensive experience with Azure Databricks (ADB), Delta Lake, Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), Azure SQL Database (SQL DB), SQL, ELT/ETL pipeline development in a Spark-based environment. Extensive experience with Spark Core, PySpark, Python, Spark SQL, Scala, Azure Blob Storage. Experience in real-time data processing using Apache Kafka/Event Hub/IoT, Structured Streaming and Stream Analytics. Experience with Apache Airflow for ELT orchestration. Experience with infrastructure management and infrastructure as code (e.g. Terraform). Experience with CI/CD and version control tools like GitHub, Azure DevOps. Experience with version control tools and building CI/CD pipelines. Experience with the Azure cloud platform. Good to have: Experience/knowledge of containerization - Docker, Kubernetes. Experience working in Agile methodology. Qualifications: BE, MS, M.Tech or MCA. Additional Information Certifications: Azure Big Data, Databricks Certified Associate
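For illustration only (not part of the posting): a hedged sketch of the real-time processing piece described above, a Spark Structured Streaming job reading from a Kafka-compatible endpoint (Azure Event Hubs exposes one) and appending to a Delta table. The broker address, topic, schema, paths, and the omitted SASL authentication options are all assumptions.

```python
# Structured Streaming sketch: Kafka-compatible source -> parsed JSON -> Delta sink.
# Requires the Spark Kafka connector (bundled on Databricks); auth options omitted for brevity.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("temperature", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "example-namespace.servicebus.windows.net:9093")  # placeholder
    .option("subscribe", "telemetry")                                                    # placeholder topic
    .load()
)

# Kafka delivers the payload as bytes in `value`; decode and unpack it with the declared schema.
events = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e")).select("e.*")

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry")   # assumed mount point
    .outputMode("append")
    .start("/mnt/curated/telemetry")
)
query.awaitTermination()
```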
Posted 2 months ago
13 - 18 years
15 - 20 Lacs
Pune
Work from Office
About The Role: We are seeking a highly experienced Storage & Backup Lead with 15+ years of expertise in managing large-scale enterprise storage, backup, and disaster recovery environments. Strategic Leadership & Architecture Define and drive the enterprise storage and backup strategy, aligning with business objectives and industry best practices. Design, implement, and manage highly available, scalable, and secure storage and backup infrastructures across hybrid and multi-cloud environments. Lead technology modernization initiatives, including cloud-based storage solutions, software-defined storage, and advanced data protection methodologies. Develop and enforce data governance, security, and compliance policies for storage and backup systems. Evaluate emerging storage and backup technologies to drive continuous improvement and innovation. Technical Oversight & Operations Management Serve as the deep SME and escalation point for enterprise storage, backup, and disaster recovery issues. Oversee end-to-end backup, replication, and recovery processes, ensuring compliance with RPOs and RTOs. Manage and optimize performance, capacity planning, and lifecycle management for enterprise storage infrastructure. Troubleshoot and resolve complex technical issues, working closely with cross-functional teams, vendors, and MSPs. Lead incident response, disaster recovery planning, and business continuity testing. Establish monitoring and automation strategies for improved efficiency and operational excellence. Team & Vendor Management Drive cost optimization and budgeting for storage and backup solutions. 15+ years of hands-on experience in enterprise storage, backup, and disaster recovery technologies. Primary Skills Enterprise Storage: Dell EMC (VMAX, VNX, PowerMax), IBM (DS8000, FlashSystem), NetApp, Pure Storage. Backup & Disaster Recovery: Dell EMC NetWorker, Data Domain, Veeam, Azure Backup, Rubrik, Cohesity. SAN & NAS Architectures: Expertise in Fibre Channel SAN switches (Brocade, Cisco), iSCSI, multipathing, zoning. Cloud & Hybrid Storage: Strong knowledge of AWS S3, Azure Blob, Google Cloud Storage, and cloud-based backup strategies. Data Protection & Compliance: Experience in implementing data encryption, retention policies, and regulatory compliance (GDPR, HIPAA, ISO 27001). Automation & Scripting: Proficiency in PowerShell, Python, Ansible for infrastructure automation and backup orchestration.
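For illustration only (not part of the posting): a small Python automation sketch of the kind mentioned under "Automation & Scripting", scanning a backup container in Azure Blob Storage and reporting blobs older than a retention window. The connection string, container name, and 30-day retention value are assumptions.

```python
# Retention check sketch: list backup blobs and flag those past the assumed retention window.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobServiceClient

CONN_STR = "<storage-account-connection-string>"   # placeholder credential
CONTAINER = "nightly-backups"                      # hypothetical container name
RETENTION_DAYS = 30                                # assumed retention policy

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client(CONTAINER)
cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

# BlobProperties.last_modified is timezone-aware, so it compares cleanly against the cutoff.
expired = [blob.name for blob in container.list_blobs() if blob.last_modified < cutoff]

print(f"{len(expired)} backup blob(s) exceed the {RETENTION_DAYS}-day retention window:")
for name in expired:
    print(" -", name)   # a real job might delete these or hand them to a lifecycle policy
```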
Posted 3 months ago
4 - 6 years
6 - 8 Lacs
Bengaluru
Work from Office
About The Role: Storage & Backup Management: Configure, manage, and troubleshoot enterprise SAN, NAS, and Object Storage solutions. Perform data backup and recovery operations using enterprise backup solutions (e.g., Veritas NetBackup, Commvault, Veeam, IBM Spectrum Protect). Implement and monitor backup schedules, policies, and retention plans to ensure data protection and disaster recovery. Optimize storage utilization and performance through capacity planning and monitoring tools. Conduct firmware and software upgrades for storage and backup solutions. Cloud (AWS/Azure) Integration: Deploy and manage cloud-based storage solutions (Amazon S3, Azure Blob Storage, AWS EBS, Azure Files). Implement backup and disaster recovery strategies in cloud environments. Work with cloud-native backup services (AWS Backup, Azure Backup) for hybrid infrastructure. Assist in cloud migration projects related to storage and data management. Primary Skills Storage Administration: NetApp, Dell EMC, HPE 3PAR, IBM Storage, Pure Storage. Backup Solutions: Veritas NetBackup, Commvault, Veeam, IBM Spectrum Protect, Rubrik. Protocols: NFS, CIFS, iSCSI, FC, RAID, Object Storage. Scripting & Automation: PowerShell, Python, Bash (Preferred). Secondary Skills Cloud Storage: AWS S3, Azure Blob, AWS EBS, Azure Files. Cloud Backup: AWS Backup, Azure Backup, CloudEndure, Druva. Infrastructure as Code (IaC): Terraform, CloudFormation (Preferred).
Posted 3 months ago
6 - 9 years
8 - 11 Lacs
Bengaluru
Work from Office
About The Role: Key Responsibilities: Storage & Backup Management: Configure, manage, and troubleshoot enterprise SAN, NAS, and Object Storage solutions. Perform data backup and recovery operations using enterprise backup solutions (e.g., Veritas NetBackup, Commvault, Veeam, IBM Spectrum Protect). Implement and monitor backup schedules, policies, and retention plans to ensure data protection and disaster recovery. Optimize storage utilization and performance through capacity planning and monitoring tools. Conduct firmware and software upgrades for storage and backup solutions. Cloud (AWS/Azure) Integration: Deploy and manage cloud-based storage solutions (Amazon S3, Azure Blob Storage, AWS EBS, Azure Files). Implement backup and disaster recovery strategies in cloud environments. Work with cloud-native backup services (AWS Backup, Azure Backup) for hybrid infrastructure. Assist in cloud migration projects related to storage and data management. Primary Skills Storage Administration: NetApp, Dell EMC, HPE 3PAR, IBM Storage, Pure Storage. Backup Solutions: Veritas NetBackup, Commvault, Veeam, IBM Spectrum Protect, Rubrik. Protocols: NFS, CIFS, iSCSI, FC, RAID, Object Storage. Scripting & Automation: PowerShell, Python, Bash (Preferred). Secondary Skills Cloud Storage: AWS S3, Azure Blob, AWS EBS, Azure Files. Cloud Backup: AWS Backup, Azure Backup, CloudEndure, Druva. Infrastructure as Code (IaC): Terraform, CloudFormation (Preferred).
Posted 3 months ago
6 - 9 years
8 - 11 Lacs
Bengaluru
Work from Office
About The Role: Storage Engineer. The Backup and Storage Lead will be responsible for defining the roadmap, implementation, and operations of backup and storage. Will review the current footprint and implementation and create a roadmap and plan for improvements and changes for on-premises and public cloud. Expertise in large-scale architecture and deployment of enterprise backup and storage solutions across multiple data centers, sites, and public cloud. Experience with backup strategy in public clouds such as Azure and AWS. Experience working with managed service providers. Experience in designing storage technology and integrating various cloud storage options. Experience in capacity planning and performing analysis of the current landscape of storage workloads and storage pools. 4 years of experience in architecting and implementing enterprise-scale backup and storage solutions. Proven experience includes SAN-based block storage, object storage, and file storage. Good understanding of physical tape libraries and experience with security key managers. Ability to work across the infrastructure team to create a vision and roadmap for various infrastructure teams (VMware, Network, Linux, Windows, Unix, etc.). Ability to author and implement basic scripting and coding to automate technical needs such as moving files, backups, creating configurations, or other tasks. Good understanding of Shell/Python and Ansible scripting. Primary Skills Storage Administration: NetApp, Dell EMC, HPE 3PAR, IBM Storage, Pure Storage. Backup Solutions: Veritas NetBackup, Commvault, Veeam, IBM Spectrum Protect, Rubrik. Protocols: NFS, CIFS, iSCSI, FC, RAID, Object Storage. Scripting & Automation: PowerShell, Python, Bash (Preferred). Secondary Skills Cloud Storage: AWS S3, Azure Blob, AWS EBS, Azure Files. Cloud Backup: AWS Backup, Azure Backup, CloudEndure, Druva. Infrastructure as Code (IaC): Terraform, CloudFormation (Preferred).
Posted 3 months ago
3 - 8 years
5 - 15 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
Skills: Azure Nice to Have Skills: Python or Java, SQL, ETL, Hadoop, Azure certification Technical & Professional requirements: Experience in the Azure stack (Azure Data Lake, Azure Data Factory, Azure Databricks). Good understanding of other Azure services like Azure Data Lake Analytics & U-SQL, Azure SQL DW and Azure Synapse SQL. Experience in building and designing cloud applications using Microsoft Azure cloud technologies. Strong understanding of Azure Data Services including ADF, ADLS, Blob, Databricks, PolyBase, with a background in Hive, Python, Spark. Strong working knowledge of SQL Server, SQL Azure Database, NoSQL, Data Modeling, Azure AD, ADFS, Identity & Access Management. Good exposure to Design Patterns, Enterprise Library, UML, Agile & Waterfall Methodologies. Good to have: understanding of .NET 4.5 Framework, Entity Framework, C#, Web Services, RESTful Services, Web Forms, JavaScript, jQuery, AngularJS, HTML5
Posted 3 months ago
3 - 6 years
14 - 18 Lacs
Mysore
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 3 months ago
5 - 6 years
15 - 16 Lacs
Hyderabad
Remote
.NET Core, C#, Clean Architecture, Mediator Pattern, Domain Design, EF Code First, SQL, Azure Blob
Posted 3 months ago
3 - 8 years
5 - 15 Lacs
Chennai
Work from Office
A Snowflake Architect is responsible for designing, implementing, and optimizing data solutions using the Snowflake Cloud Data Platform, ensuring scalability, security, and high performance in data warehousing, analytics, and cloud data solutions. Role & responsibilities 1. Architecture & Design Design end-to-end data solutions using the Snowflake Cloud Data Platform. Define data architecture strategy, ensuring scalability and security. Establish best practices for Snowflake implementation, including data modeling, schema design, and query optimization. Design data lakes, data marts, and enterprise data warehouses (EDW) in Snowflake. 2. Data Engineering & Development Oversee ETL/ELT pipelines using Snowflake Snowpipe, Streams, Tasks, and Stored Procedures. Ensure efficient data ingestion, transformation, and storage using SQL, Python, or Scala. Implement data partitioning, clustering, and performance tuning for optimized query execution. 3. Security & Compliance Implement role-based access control (RBAC) and data governance policies. Ensure encryption, auditing, and data masking for security compliance. Define multi-cloud strategies (AWS, Azure, GCP) for Snowflake deployments. 4. Performance Optimization Optimize query performance and warehouse compute resources to reduce costs. Implement Materialized Views, Query Acceleration, and Caching to improve performance. Monitor Snowflake usage, cost management, and auto-scaling capabilities. 5. Integration & Automation Integrate Snowflake with BI tools (Tableau, Power BI, Looker) and data lakes (S3, Azure Blob, GCS). Automate data workflows and pipeline orchestration using Airflow, dbt, or Snowflake Tasks. Implement CI/CD pipelines for data model deployments and schema changes. 6. Stakeholder Collaboration & Leadership Work closely with business analysts, data scientists, and IT teams to define requirements. Act as a technical advisor for Snowflake-related decisions and best practices. Provide mentorship and training to data engineers and analysts on Snowflake architecture. Key Skills Required: Snowflake Data Warehouse (Warehouses, Secure Data Sharing, Multi-Cluster Architecture) SQL, Python, Scala (for data processing and scripting) ETL/ELT & Data Pipelines (Informatica, Talend, dbt, Airflow) Cloud Services (AWS, Azure, GCP integration) Performance Tuning (Query Optimization, Snowflake Caching) Security & Governance (RBAC, PII Data Masking, Compliance) BI Tools Integration (Tableau, Power BI, Looker)
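For illustration only (not part of the posting): a hedged sketch of the ELT pattern described above, a staged COPY INTO load plus a Streams/Tasks merge, driven through the Snowflake Python connector. The account locator, credentials, warehouse, stage, stream, and table names are all hypothetical.

```python
# Snowflake ELT sketch: batch-load staged files, then schedule a task that merges stream changes.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",            # placeholder account locator
    user="ETL_USER",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

with conn.cursor() as cur:
    # One-off batch load from an external/internal stage (Snowpipe would do this continuously).
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE/daily/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)

    # A task that merges rows captured by a stream into the curated table every 15 minutes.
    cur.execute("""
        CREATE OR REPLACE TASK CURATED.MERGE_ORDERS
          WAREHOUSE = ETL_WH
          SCHEDULE = '15 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('RAW.ORDERS_STREAM')
        AS
          MERGE INTO CURATED.ORDERS t
          USING RAW.ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
          WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT
          WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT) VALUES (s.ORDER_ID, s.AMOUNT)
    """)
    cur.execute("ALTER TASK CURATED.MERGE_ORDERS RESUME")   # tasks are created suspended

conn.close()
```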
Posted 3 months ago
3 - 6 years
5 - 8 Lacs
Kolkata
Work from Office
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 3 months ago
6 - 7 years
8 - 10 Lacs
Bengaluru
Work from Office
Responsibilities As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 3 months ago
2 - 5 years
5 - 9 Lacs
Bengaluru
Work from Office
Responsibilities As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 3 months ago
6 - 7 years
8 - 9 Lacs
Bengaluru
Work from Office
Responsibilities As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total Exp: 6-7 Yrs (Relevant: 4-5 Yrs) Mandatory Skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer Ability to use Extract, Transform, and Load (ETL), data integration, or data federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop and Java Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions Ability to communicate results to technical and non-technical audiences
Posted 3 months ago
3 - 7 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: Microsoft Azure Data Services Good to have skills: NA Minimum 3 year(s) of experience is required Educational Qualification: As per Accenture Standards Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with Microsoft Azure Data Services and collaborating with cross-functional teams to deliver impactful solutions. Roles & Responsibilities: Lead the design, development, and deployment of applications using Microsoft Azure Data Services. Act as the primary point of contact for the project, collaborating with cross-functional teams to ensure timely delivery of solutions. Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards. Conduct detailed analysis of business requirements, translating them into technical specifications and design documents. Ensure the quality and integrity of the application through rigorous testing and debugging. Professional & Technical Skills: Must Have Skills: Azure Data Factory (Data Pipeline and Framework implementation), SQL Server (Strong SQL Development) including SQL Stored Procedures, ETL/ELT and DWH concepts, Azure DevOps, Azure Blob, Gen1/Gen2 Additional Information: The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful solutions. This position is based at our Bengaluru office. Qualification: As per Accenture Standards
Posted 3 months ago
5 - 10 years
22 - 25 Lacs
Hyderabad
Work from Office
Networking Architecture: Design and deploy network architectures on Azure, including hybrid and multi-cloud solutions. Configure Azure networking components such as VNets, VPN gateways, ExpressRoute, and Azure Route Server. Implement network segmentation and micro-segmentation to enhance security and traffic isolation. Deploy load balancers, application gateways, and traffic managers for scalable solutions. Ensure high availability and disaster recovery for networking components. Server Administration: Design and implement virtual machines (VMs) and scale sets in Azure. Configure and manage Windows and Linux servers in Azure. Implement Azure Auto-Scaling and Load Balancing solutions for virtual machines. Oversee server patch management, monitoring, and backup strategies. Storage Solutions Design: Architect and manage Azure storage services, including Azure Blob, Azure Files, and Azure Disks. Design and implement backup and recovery solutions using Azure Backup and Azure Site Recovery. Optimize storage solutions for cost and performance while ensuring security compliance. Configure storage accounts, access tiers, and redundancy options (LRS, GRS, ZRS). Optimization and Troubleshooting: Monitor and optimize Azure resources for performance, scalability, and cost-effectiveness. Troubleshoot complex networking, server, and storage issues in Azure environments. Use tools like Azure Monitor, Network Watcher, and Log Analytics for proactive issue identification
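For illustration only (not part of the posting): a minimal Python sketch of the storage cost-optimization duty described above, moving blobs that have not been modified recently from the Hot to the Cool access tier via the azure-storage-blob SDK. The connection string, container name, and 90-day threshold are assumptions.

```python
# Access-tier optimization sketch: demote cold data to the Cool tier to cut storage cost.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")  # placeholder
container = service.get_container_client("app-data")          # hypothetical container
cutoff = datetime.now(timezone.utc) - timedelta(days=90)       # assumed coldness threshold

for blob in container.list_blobs():
    # Only touch Hot-tier blobs that have not changed within the threshold window.
    if blob.last_modified < cutoff and blob.blob_tier == StandardBlobTier.HOT:
        container.get_blob_client(blob.name).set_standard_blob_tier(StandardBlobTier.COOL)
        print(f"Moved {blob.name} to Cool tier")
```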
Posted 1 month ago