To address the excessive computation on redundant features, the poor convolution performance, and the inefficient parallel combination of parameters in parallel deep convolutional neural network (DCNN) algorithms in big data environments, we propose a Winograd-based parallel DCNN optimization algorithm (WP-DCNN). First, we design a feature-filtering strategy based on cosine similarity and normalized mutual information, which eliminates the computation of redundant features across channels by filtering and fusing them. Second, we present a MapReduce-based parallel Winograd convolution strategy, which replaces traditional convolution with parallelized Winograd convolution to improve convolution performance. Finally, we propose a load-balancing strategy based on task migration, which dynamically migrates load to balance the cluster nodes and reduce the cluster's average response time, thereby improving the efficiency of parallel parameter combination. Experiments show that WP-DCNN significantly reduces the training cost of DCNN in big data environments and improves the training efficiency of parallel DCNN.
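The feature-filtering idea can be illustrated with a minimal sketch. The function names (`cosine_sim`, `nmi`, `filter_redundant`), the histogram-based NMI estimator, the averaging fusion rule, and the thresholds are all assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two flattened feature maps.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def nmi(a, b, bins=16):
    # Histogram-based normalized mutual information (an assumed estimator).
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return 2 * mi / (hx + hy + 1e-12)

def filter_redundant(feats, sim_t=0.95, nmi_t=0.9):
    """Keep a channel only if no already-kept channel is redundant
    with it under BOTH measures; fuse redundant pairs by averaging."""
    kept = []
    for f in feats:
        dup = next((i for i, k in enumerate(kept)
                    if cosine_sim(f.ravel(), k.ravel()) > sim_t
                    and nmi(f.ravel(), k.ravel()) > nmi_t), None)
        if dup is None:
            kept.append(f.astype(float))
        else:
            kept[dup] = (kept[dup] + f) / 2  # fuse instead of recomputing
    return kept
```

Requiring both measures to agree guards against discarding channels that are linearly similar but statistically distinct, or vice versa.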
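The core of Winograd convolution is trading multiplications for additions. A minimal 1-D sketch of the standard F(2,3) transform (two outputs of a 3-tap filter in 4 multiplications instead of 6) conveys the idea; the paper's strategy parallelizes the 2-D analogue over MapReduce, which is not reproduced here:

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd minimal filtering F(2,3): computes two outputs of a
    1-D 3-tap correlation using 4 multiplications instead of 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    # y0 = d0*g0 + d1*g1 + d2*g2 ; y1 = d1*g0 + d2*g1 + d3*g2
    return np.array([m1 + m2 + m3, m2 - m3 - m4])
```

The filter-side factors can be precomputed once per kernel, so the per-tile cost drops further in a real convolution layer.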
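Task-migration load balancing can likewise be sketched greedily: repeatedly move a task from the most-loaded node to the least-loaded node while doing so narrows the gap. The function name, the per-task cost model, and the greedy policy are assumptions for illustration; the paper's strategy operates on cluster nodes at runtime:

```python
def migrate_tasks(node_tasks, max_rounds=1000):
    """Greedy sketch: node_tasks[i] is a list of task costs on node i.
    Migrate the cheapest task from the busiest to the idlest node
    while the move strictly reduces the load imbalance."""
    loads = [sum(ts) for ts in node_tasks]
    for _ in range(max_rounds):
        src = max(range(len(loads)), key=loads.__getitem__)
        dst = min(range(len(loads)), key=loads.__getitem__)
        if not node_tasks[src]:
            break
        t = min(node_tasks[src])
        # Moving cost t reduces |load gap| only when t < gap.
        if loads[src] - loads[dst] <= t:
            break
        node_tasks[src].remove(t)
        node_tasks[dst].append(t)
        loads[src] -= t
        loads[dst] += t
    return node_tasks
```

Evening out node loads shortens the slowest node's queue, which is what drives down the cluster's average response time during parallel parameter combination.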